A low-level client representing Amazon Comprehend:
import boto3
client = boto3.client('comprehend')
These are the available methods:
Determines the dominant language of the input text for a batch of documents. For a list of languages that Amazon Comprehend can detect, see Amazon Comprehend Supported Languages .
See also: AWS API Documentation
Request Syntax
response = client.batch_detect_dominant_language(
TextList=[
'string',
]
)
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document should contain at least 20 characters and must contain fewer than 5,000 bytes of UTF-8 encoded characters.
{
'ResultList': [
{
'Index': 123,
'Languages': [
{
'LanguageCode': 'string',
'Score': ...
},
]
},
],
'ErrorList': [
{
'Index': 123,
'ErrorCode': 'string',
'ErrorMessage': 'string'
},
]
}
Response Structure
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
The zero-based index of the document in the input list.
One or more DominantLanguage objects describing the dominant languages in the document.
Returns the code for the dominant language in the input text and the level of confidence that Amazon Comprehend has in the accuracy of the detection.
The RFC 5646 language code for the dominant language. For more information about RFC 5646, see Tags for Identifying Languages on the IETF Tools web site.
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
The zero-based index of the document in the input list.
The numeric error code of the error.
A text description of the error.
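For example, a minimal usage sketch (the sample documents below are illustrative):

import boto3

comprehend = boto3.client('comprehend')

# Each string should be at least 20 characters and under 5,000 bytes of UTF-8.
documents = [
    'Machine learning is fascinating and full of surprises.',
    'El aprendizaje profundo transforma la industria del software.',
]

response = comprehend.batch_detect_dominant_language(TextList=documents)

# Successful documents appear in ResultList; failures appear in ErrorList.
for result in response['ResultList']:
    top = max(result['Languages'], key=lambda lang: lang['Score'])
    print(result['Index'], top['LanguageCode'], round(top['Score'], 3))
for error in response['ErrorList']:
    print('Document', error['Index'], 'failed:', error['ErrorCode'], error['ErrorMessage'])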
Inspects the text of a batch of documents for named entities and returns information about them. For more information about named entities, see how-entities
See also: AWS API Documentation
Request Syntax
response = client.batch_detect_entities(
TextList=[
'string',
],
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'ResultList': [
{
'Index': 123,
'Entities': [
{
'Score': ...,
'Type': 'PERSON'|'LOCATION'|'ORGANIZATION'|'COMMERCIAL_ITEM'|'EVENT'|'DATE'|'QUANTITY'|'TITLE'|'OTHER',
'Text': 'string',
'BeginOffset': 123,
'EndOffset': 123
},
]
},
],
'ErrorList': [
{
'Index': 123,
'ErrorCode': 'string',
'ErrorMessage': 'string'
},
]
}
Response Structure
(dict) --
ResultList (list) --
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
(dict) --
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
Index (integer) --
The zero-based index of the document in the input list.
Entities (list) --
One or more Entity objects, one for each entity detected in the document.
(dict) --
Provides information about an entity.
Score (float) --
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
Type (string) --
The entity's type.
Text (string) --
The text of the entity.
BeginOffset (integer) --
A character offset in the input text that shows where the entity begins (the first character is at position 0). The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
EndOffset (integer) --
A character offset in the input text that shows where the entity ends. The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
ErrorList (list) --
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
(dict) --
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
Index (integer) --
The zero-based index of the document in the input list.
ErrorCode (string) --
The numeric error code of the error.
ErrorMessage (string) --
A text description of the error.
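For example, a minimal usage sketch (the sample documents are illustrative; all documents must share one language):

import boto3

comprehend = boto3.client('comprehend')

documents = [
    'Jeff Bezos founded Amazon in Seattle in 1994.',
    'The conference takes place in Las Vegas next November.',
]

response = comprehend.batch_detect_entities(TextList=documents, LanguageCode='en')

for result in response['ResultList']:
    for entity in result['Entities']:
        # Each entity carries its type, text, character offsets, and confidence score.
        print(result['Index'], entity['Type'], entity['Text'], round(entity['Score'], 3))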
Detects the key noun phrases found in a batch of documents.
See also: AWS API Documentation
Request Syntax
response = client.batch_detect_key_phrases(
TextList=[
'string',
],
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'ResultList': [
{
'Index': 123,
'KeyPhrases': [
{
'Score': ...,
'Text': 'string',
'BeginOffset': 123,
'EndOffset': 123
},
]
},
],
'ErrorList': [
{
'Index': 123,
'ErrorCode': 'string',
'ErrorMessage': 'string'
},
]
}
Response Structure
(dict) --
ResultList (list) --
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
(dict) --
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
Index (integer) --
The zero-based index of the document in the input list.
KeyPhrases (list) --
One or more KeyPhrase objects, one for each key phrase detected in the document.
(dict) --
Describes a key noun phrase.
Score (float) --
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
Text (string) --
The text of a key noun phrase.
BeginOffset (integer) --
A character offset in the input text that shows where the key phrase begins (the first character is at position 0). The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
EndOffset (integer) --
A character offset in the input text where the key phrase ends. The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
ErrorList (list) --
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
(dict) --
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
Index (integer) --
The zero-based index of the document in the input list.
ErrorCode (string) --
The numeric error code of the error.
ErrorMessage (string) --
A text description of the error.
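For example, a minimal usage sketch (the sample documents are illustrative):

import boto3

comprehend = boto3.client('comprehend')

documents = [
    'The new smartphone has an excellent camera and a long battery life.',
    'Heavy rain caused flooding across the northern part of the city.',
]

response = comprehend.batch_detect_key_phrases(TextList=documents, LanguageCode='en')

for result in response['ResultList']:
    # Collect just the phrase text for each document.
    phrases = [kp['Text'] for kp in result['KeyPhrases']]
    print(result['Index'], phrases)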
Inspects a batch of documents and returns an inference of the prevailing sentiment, POSITIVE , NEUTRAL , MIXED , or NEGATIVE , in each one.
See also: AWS API Documentation
Request Syntax
response = client.batch_detect_sentiment(
TextList=[
'string',
],
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'ResultList': [
{
'Index': 123,
'Sentiment': 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED',
'SentimentScore': {
'Positive': ...,
'Negative': ...,
'Neutral': ...,
'Mixed': ...
}
},
],
'ErrorList': [
{
'Index': 123,
'ErrorCode': 'string',
'ErrorMessage': 'string'
},
]
}
Response Structure
(dict) --
ResultList (list) --
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
(dict) --
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
Index (integer) --
The zero-based index of the document in the input list.
Sentiment (string) --
The sentiment detected in the document.
SentimentScore (dict) --
The level of confidence that Amazon Comprehend has in the accuracy of its sentiment detection.
Positive (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the POSITIVE sentiment.
Negative (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEGATIVE sentiment.
Neutral (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEUTRAL sentiment.
Mixed (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the MIXED sentiment.
ErrorList (list) --
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
(dict) --
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
Index (integer) --
The zero-based index of the document in the input list.
ErrorCode (string) --
The numeric error code of the error.
ErrorMessage (string) --
A text description of the error.
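For example, a minimal usage sketch (the sample documents are illustrative):

import boto3

comprehend = boto3.client('comprehend')

documents = [
    'I absolutely love this product; it exceeded my expectations.',
    'The delivery was late and the packaging was damaged.',
]

response = comprehend.batch_detect_sentiment(TextList=documents, LanguageCode='en')

for result in response['ResultList']:
    scores = result['SentimentScore']
    # SentimentScore keys are capitalized ('Positive', 'Negative', 'Neutral', 'Mixed').
    key = result['Sentiment'].capitalize()
    print(result['Index'], result['Sentiment'], round(scores[key], 3))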
Inspects the text of a batch of documents for the syntax and part of speech of the words in the document and returns information about them. For more information, see how-syntax .
See also: AWS API Documentation
Request Syntax
response = client.batch_detect_syntax(
TextList=[
'string',
],
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A list containing the text of the input documents. The list can contain a maximum of 25 documents. Each document must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'ResultList': [
{
'Index': 123,
'SyntaxTokens': [
{
'TokenId': 123,
'Text': 'string',
'BeginOffset': 123,
'EndOffset': 123,
'PartOfSpeech': {
'Tag': 'ADJ'|'ADP'|'ADV'|'AUX'|'CONJ'|'CCONJ'|'DET'|'INTJ'|'NOUN'|'NUM'|'O'|'PART'|'PRON'|'PROPN'|'PUNCT'|'SCONJ'|'SYM'|'VERB',
'Score': ...
}
},
]
},
],
'ErrorList': [
{
'Index': 123,
'ErrorCode': 'string',
'ErrorMessage': 'string'
},
]
}
Response Structure
(dict) --
ResultList (list) --
A list of objects containing the results of the operation. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If all of the documents contain an error, the ResultList is empty.
(dict) --
The result of calling the operation. The operation returns one object for each document that is successfully processed by the operation.
Index (integer) --
The zero-based index of the document in the input list.
SyntaxTokens (list) --
The syntax tokens for the words in the document, one token for each word.
(dict) --
Represents a word in the input text that was recognized and assigned a part of speech. There is one syntax token record for each word in the source text.
TokenId (integer) --
A unique identifier for a token.
Text (string) --
The word that was recognized in the source text.
BeginOffset (integer) --
The zero-based offset from the beginning of the source text to the first character in the word.
EndOffset (integer) --
The zero-based offset from the beginning of the source text to the last character in the word.
PartOfSpeech (dict) --
Provides the part of speech label and the confidence level that Amazon Comprehend has that the part of speech was correctly identified. For more information, see how-syntax .
Tag (string) --
Identifies the part of speech that the token represents.
Score (float) --
The confidence that Amazon Comprehend has that the part of speech was correctly identified.
ErrorList (list) --
A list containing one object for each document that contained an error. The results are sorted in ascending order by the Index field and match the order of the documents in the input list. If there are no errors in the batch, the ErrorList is empty.
(dict) --
Describes an error that occurred while processing a document in a batch. The operation returns one BatchItemError object for each document that contained an error.
Index (integer) --
The zero-based index of the document in the input list.
ErrorCode (string) --
The numeric error code of the error.
ErrorMessage (string) --
A text description of the error.
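For example, a minimal usage sketch (the sample document is illustrative):

import boto3

comprehend = boto3.client('comprehend')

documents = ['The quick brown fox jumps over the lazy dog.']

response = comprehend.batch_detect_syntax(TextList=documents, LanguageCode='en')

for result in response['ResultList']:
    for token in result['SyntaxTokens']:
        # Each token reports its text, part-of-speech tag, and confidence score.
        print(token['Text'], token['PartOfSpeech']['Tag'], round(token['PartOfSpeech']['Score'], 3))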
Check if an operation can be paginated.
Creates a new document classifier that you can use to categorize documents. To create a classifier, you provide a set of training documents that are labeled with the categories that you want to use. After the classifier is trained, you can use it to categorize documents into those categories. For more information, see how-document-classification .
See also: AWS API Documentation
Request Syntax
response = client.create_document_classifier(
DocumentClassifierName='string',
DataAccessRoleArn='string',
InputDataConfig={
'S3Uri': 'string'
},
ClientRequestToken='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
The name of the document classifier.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
[REQUIRED]
Specifies the format and location of the input data for the job.
The Amazon S3 URI for the input data. The S3 bucket must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of input files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'DocumentClassifierArn': 'string'
}
Response Structure
(dict) --
DocumentClassifierArn (string) --
The Amazon Resource Name (ARN) that identifies the document classifier.
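For example, a minimal usage sketch; the classifier name, role ARN, and S3 location below are placeholders to replace with your own values:

import boto3

comprehend = boto3.client('comprehend')

response = comprehend.create_document_classifier(
    DocumentClassifierName='support-ticket-classifier',  # placeholder name
    DataAccessRoleArn='arn:aws:iam::123456789012:role/ComprehendDataAccessRole',  # placeholder role
    InputDataConfig={'S3Uri': 's3://my-training-bucket/classifier-training.csv'},  # placeholder bucket
    LanguageCode='en',
)

# The returned ARN identifies the classifier while it trains.
print(response['DocumentClassifierArn'])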
Creates an entity recognizer using submitted files. After your CreateEntityRecognizer request is submitted, you can check job status using the API.
See also: AWS API Documentation
Request Syntax
response = client.create_entity_recognizer(
RecognizerName='string',
DataAccessRoleArn='string',
InputDataConfig={
'EntityTypes': [
{
'Type': 'string'
},
],
'Documents': {
'S3Uri': 'string'
},
'Annotations': {
'S3Uri': 'string'
},
'EntityList': {
'S3Uri': 'string'
}
},
ClientRequestToken='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
The name given to the newly created recognizer. Recognizer names can be a maximum of 256 characters. Alphanumeric characters, hyphens (-) and underscores (_) are allowed. The name must be unique in the account/region.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
[REQUIRED]
Specifies the format and location of the input data. The S3 bucket containing the input data must be located in the same region as the entity recognizer being created.
The entity types in the input data for an entity recognizer.
Information about an individual item on a list of entity types.
Entity type of an item on an entity type list.
S3 location of the documents folder for an entity recognizer
Specifies the Amazon S3 location where the training documents for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
S3 location of the annotations file for an entity recognizer.
Specifies the Amazon S3 location where the annotations for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
S3 location of the entity list for an entity recognizer.
Specifies the Amazon S3 location where the entity list is located. The URI must be in the same region as the API endpoint that you are calling.
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
[REQUIRED]
The language of the input documents. All documents must be in the same language. Only English ("en") is currently supported.
dict
Response Syntax
{
'EntityRecognizerArn': 'string'
}
Response Structure
(dict) --
EntityRecognizerArn (string) --
The Amazon Resource Name (ARN) that identifies the entity recognizer.
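For example, a minimal usage sketch; the recognizer name, entity type, role ARN, and S3 locations are placeholders:

import boto3

comprehend = boto3.client('comprehend')

response = comprehend.create_entity_recognizer(
    RecognizerName='device-name-recognizer',  # placeholder name
    DataAccessRoleArn='arn:aws:iam::123456789012:role/ComprehendDataAccessRole',  # placeholder role
    InputDataConfig={
        'EntityTypes': [{'Type': 'DEVICE'}],  # placeholder entity type
        'Documents': {'S3Uri': 's3://my-training-bucket/documents/'},
        'EntityList': {'S3Uri': 's3://my-training-bucket/entity-list.csv'},
    },
    LanguageCode='en',
)

print(response['EntityRecognizerArn'])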
Deletes a previously created document classifier
Only those classifiers that are in terminated states (IN_ERROR, TRAINED) will be deleted. If an active inference job is using the model, a ResourceInUseException will be returned.
This is an asynchronous action that puts the classifier into a DELETING state, and it is then removed by a background job. Once removed, the classifier disappears from your account and is no longer available for use.
See also: AWS API Documentation
Request Syntax
response = client.delete_document_classifier(
DocumentClassifierArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) that identifies the document classifier.
{}
Response Structure
Deletes an entity recognizer.
Only those recognizers that are in terminated states (IN_ERROR, TRAINED) will be deleted. If an active inference job is using the model, a ResourceInUseException will be returned.
This is an asynchronous action that puts the recognizer into a DELETING state, and it is then removed by a background job. Once removed, the recognizer disappears from your account and is no longer available for use.
See also: AWS API Documentation
Request Syntax
response = client.delete_entity_recognizer(
EntityRecognizerArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) that identifies the entity recognizer.
{}
Response Structure
Gets the properties associated with a document classification job. Use this operation to get the status of a classification job.
See also: AWS API Documentation
Request Syntax
response = client.describe_document_classification_job(
JobId='string'
)
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
{
'DocumentClassificationJobProperties': {
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'DocumentClassifierArn': 'string',
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'DataAccessRoleArn': 'string'
}
}
Response Structure
An object that describes the properties associated with the document classification job.
The identifier assigned to the document classification job.
The name that you assigned to the document classification job.
The current status of the document classification job. If the status is FAILED , the Message field shows the reason for the failure.
A description of the status of the job.
The time that the document classification job was submitted for processing.
The time that the document classification job completed.
The Amazon Resource Name (ARN) that identifies the document classifier.
The input data configuration that you supplied when you created the document classification job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document.
The output data configuration that you supplied when you created the document classification job.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
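For example, a minimal polling sketch, assuming you already have a job identifier from a previously started classification job:

import time

import boto3

comprehend = boto3.client('comprehend')

job_id = 'replace-with-your-job-id'  # placeholder

while True:
    response = comprehend.describe_document_classification_job(JobId=job_id)
    properties = response['DocumentClassificationJobProperties']
    if properties['JobStatus'] in ('COMPLETED', 'FAILED', 'STOPPED'):
        break
    time.sleep(30)  # wait before checking the status again

print(properties['JobStatus'], properties.get('Message', ''))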
Gets the properties associated with a document classifier.
See also: AWS API Documentation
Request Syntax
response = client.describe_document_classifier(
DocumentClassifierArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) that identifies the document classifier. The operation returns this identifier in its response.
{
'DocumentClassifierProperties': {
'DocumentClassifierArn': 'string',
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'TrainingStartTime': datetime(2015, 1, 1),
'TrainingEndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string'
},
'ClassifierMetadata': {
'NumberOfLabels': 123,
'NumberOfTrainedDocuments': 123,
'NumberOfTestDocuments': 123,
'EvaluationMetrics': {
'Accuracy': 123.0,
'Precision': 123.0,
'Recall': 123.0,
'F1Score': 123.0
}
},
'DataAccessRoleArn': 'string'
}
}
Response Structure
An object that contains the properties associated with a document classifier.
The Amazon Resource Name (ARN) that identifies the document classifier.
The language code for the language of the documents that the classifier was trained on.
The status of the document classifier. If the status is TRAINED the classifier is ready to use. If the status is FAILED you can see additional information about why the classifier wasn't trained in the Message field.
Additional information about the status of the classifier.
The time that the document classifier was submitted for training.
The time that training the document classifier completed.
Indicates the time when training of the document classifier started. You are billed for the time interval between this time and the value of TrainingEndTime.
The time that training of the document classifier completed. You are billed for the time interval between the value of TrainingStartTime and this time.
The input data configuration that you supplied when you created the document classifier for training.
The Amazon S3 URI for the input data. The S3 bucket must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of input files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Information about the document classifier, including the number of documents used for training the classifier, the number of documents used for testing the classifier, and an accuracy rating.
The number of labels in the input data.
The number of documents in the input data that were used to train the classifier. Typically this is 80 to 90 percent of the input documents.
The number of documents in the input data that were used to test the classifier. Typically this is 10 to 20 percent of the input documents.
Describes the result metrics for the test data associated with a document classifier.
The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.
A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.
A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.
A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
Gets the properties associated with a dominant language detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
response = client.describe_dominant_language_detection_job(
JobId='string'
)
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
{
'DominantLanguageDetectionJobProperties': {
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'DataAccessRoleArn': 'string'
}
}
Response Structure
An object that contains the properties associated with a dominant language detection job.
The identifier assigned to the dominant language detection job.
The name that you assigned to the dominant language detection job.
The current status of the dominant language detection job. If the status is FAILED , the Message field shows the reason for the failure.
A description for the status of a job.
The time that the dominant language detection job was submitted for processing.
The time that the dominant language detection job completed.
The input data configuration that you supplied when you created the dominant language detection job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document.
The output data configuration that you supplied when you created the dominant language detection job.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
Gets the properties associated with an entities detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
response = client.describe_entities_detection_job(
JobId='string'
)
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
{
'EntitiesDetectionJobProperties': {
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'EntityRecognizerArn': 'string',
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
}
}
Response Structure
An object that contains the properties associated with an entities detection job.
The identifier assigned to the entities detection job.
The name that you assigned the entities detection job.
The current status of the entities detection job. If the status is FAILED , the Message field shows the reason for the failure.
A description of the status of a job.
The time that the entities detection job was submitted for processing.
The time that the entities detection job completed
The Amazon Resource Name (ARN) that identifies the entity recognizer.
The input data configuration that you supplied when you created the entities detection job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document.
The output data configuration that you supplied when you created the entities detection job.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
The language code of the input documents.
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
Provides details about an entity recognizer including status, S3 buckets containing training data, recognizer metadata, metrics, and so on.
See also: AWS API Documentation
Request Syntax
response = client.describe_entity_recognizer(
EntityRecognizerArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) that identifies the entity recognizer.
{
'EntityRecognizerProperties': {
'EntityRecognizerArn': 'string',
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'TrainingStartTime': datetime(2015, 1, 1),
'TrainingEndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'EntityTypes': [
{
'Type': 'string'
},
],
'Documents': {
'S3Uri': 'string'
},
'Annotations': {
'S3Uri': 'string'
},
'EntityList': {
'S3Uri': 'string'
}
},
'RecognizerMetadata': {
'NumberOfTrainedDocuments': 123,
'NumberOfTestDocuments': 123,
'EvaluationMetrics': {
'Precision': 123.0,
'Recall': 123.0,
'F1Score': 123.0
},
'EntityTypes': [
{
'Type': 'string'
},
]
},
'DataAccessRoleArn': 'string'
}
}
Response Structure
Describes information associated with an entity recognizer.
The Amazon Resource Name (ARN) that identifies the entity recognizer.
The language of the input documents. All documents must be in the same language. Only English ("en") is currently supported.
Provides the status of the entity recognizer.
A description of the status of the recognizer.
The time that the recognizer was submitted for processing.
The time that the recognizer creation completed.
The time that training of the entity recognizer started.
The time that training of the entity recognizer was completed.
The input data properties of an entity recognizer.
The entity types in the input data for an entity recognizer.
Information about an individual item on a list of entity types.
Entity type of an item on an entity type list.
S3 location of the documents folder for an entity recognizer
Specifies the Amazon S3 location where the training documents for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
S3 location of the annotations file for an entity recognizer.
Specifies the Amazon S3 location where the annotations for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
S3 location of the entity list for an entity recognizer.
Specifies the Amazon S3 location where the entity list is located. The URI must be in the same region as the API endpoint that you are calling.
Provides information about an entity recognizer.
The number of documents in the input data that were used to train the entity recognizer. Typically this is 80 to 90 percent of the input documents.
The number of documents in the input data that were used to test the entity recognizer. Typically this is 10 to 20 percent of the input documents.
Detailed information about the accuracy of an entity recognizer.
A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.
A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
Entity types from the metadata of an entity recognizer.
Individual item from the list of entity types in the metadata of an entity recognizer.
Type of entity from the list of entity types in the metadata of an entity recognizer.
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
Gets the properties associated with a key phrases detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
response = client.describe_key_phrases_detection_job(
JobId='string'
)
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
{
'KeyPhrasesDetectionJobProperties': {
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
}
}
Response Structure
An object that contains the properties associated with a key phrases detection job.
The identifier assigned to the key phrases detection job.
The name that you assigned the key phrases detection job.
The current status of the key phrases detection job. If the status is FAILED , the Message field shows the reason for the failure.
A description of the status of a job.
The time that the key phrases detection job was submitted for processing.
The time that the key phrases detection job completed.
The input data configuration that you supplied when you created the key phrases detection job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document.
The output data configuration that you supplied when you created the key phrases detection job.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
The language code of the input documents.
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
Gets the properties associated with a sentiment detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
response = client.describe_sentiment_detection_job(
JobId='string'
)
[REQUIRED]
The identifier that Amazon Comprehend generated for the job. The operation returns this identifier in its response.
{
'SentimentDetectionJobProperties': {
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
}
}
Response Structure
An object that contains the properties associated with a sentiment detection job.
The identifier assigned to the sentiment detection job.
The name that you assigned to the sentiment detection job
The current status of the sentiment detection job. If the status is FAILED , the Message field shows the reason for the failure.
A description of the status of a job.
The time that the sentiment detection job was submitted for processing.
The time that the sentiment detection job ended.
The input data configuration that you supplied when you created the sentiment detection job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document.
The output data configuration that you supplied when you created the sentiment detection job.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
The language code of the input documents.
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
Gets the properties associated with a topic detection job. Use this operation to get the status of a detection job.
See also: AWS API Documentation
Request Syntax
response = client.describe_topics_detection_job(
JobId='string'
)
[REQUIRED]
The identifier assigned by the user to the detection job.
{
'TopicsDetectionJobProperties': {
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'NumberOfTopics': 123
}
}
Response Structure
The list of properties for the requested job.
The identifier assigned to the topic detection job.
The name of the topic detection job.
The current status of the topic detection job. If the status is Failed , the reason for the failure is shown in the Message field.
A description for the status of a job.
The time that the topic detection job was submitted for processing.
The time that the topic detection job was completed.
The input data configuration supplied when you created the topic detection job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document.
The output data configuration supplied when you created the topic detection job.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
The number of topics to detect supplied when you created the topic detection job. The default is 10.
Determines the dominant language of the input text. For a list of languages that Amazon Comprehend can detect, see Amazon Comprehend Supported Languages .
See also: AWS API Documentation
Request Syntax
response = client.detect_dominant_language(
Text='string'
)
[REQUIRED]
A UTF-8 text string. Each string should contain at least 20 characters and must contain fewer than 5,000 bytes of UTF-8 encoded characters.
{
'Languages': [
{
'LanguageCode': 'string',
'Score': ...
},
]
}
Response Structure
The languages that Amazon Comprehend detected in the input text. For each language, the response returns the RFC 5646 language code and the level of confidence that Amazon Comprehend has in the accuracy of its inference. For more information about RFC 5646, see Tags for Identifying Languages on the IETF Tools web site.
Returns the code for the dominant language in the input text and the level of confidence that Amazon Comprehend has in the accuracy of the detection.
The RFC 5646 language code for the dominant language. For more information about RFC 5646, see Tags for Identifying Languages on the IETF Tools web site.
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
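For example, a minimal usage sketch (the sample text is illustrative):

import boto3

comprehend = boto3.client('comprehend')

text = 'Bob lives in Seattle. He is a software engineer who enjoys hiking on weekends.'

response = comprehend.detect_dominant_language(Text=text)

for language in response['Languages']:
    print(language['LanguageCode'], round(language['Score'], 3))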
Inspects text for named entities, and returns information about them. For more information about named entities, see how-entities .
See also: AWS API Documentation
Request Syntax
response = client.detect_entities(
Text='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A UTF-8 text string. Each string must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'Entities': [
{
'Score': ...,
'Type': 'PERSON'|'LOCATION'|'ORGANIZATION'|'COMMERCIAL_ITEM'|'EVENT'|'DATE'|'QUANTITY'|'TITLE'|'OTHER',
'Text': 'string',
'BeginOffset': 123,
'EndOffset': 123
},
]
}
Response Structure
(dict) --
Entities (list) --
A collection of entities identified in the input text. For each entity, the response provides the entity text, entity type, where the entity text begins and ends, and the level of confidence that Amazon Comprehend has in the detection. For a list of entity types, see how-entities .
(dict) --
Provides information about an entity.
Score (float) --
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
Type (string) --
The entity's type.
Text (string) --
The text of the entity.
BeginOffset (integer) --
A character offset in the input text that shows where the entity begins (the first character is at position 0). The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
EndOffset (integer) --
A character offset in the input text that shows where the entity ends. The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
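For example, a minimal usage sketch (the sample sentence is illustrative):

import boto3

comprehend = boto3.client('comprehend')

response = comprehend.detect_entities(
    Text='Andy Jassy became CEO of Amazon in July 2021.',
    LanguageCode='en',
)

for entity in response['Entities']:
    # BeginOffset and EndOffset locate the entity within the input text.
    print(entity['Type'], repr(entity['Text']), entity['BeginOffset'], entity['EndOffset'])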
Detects the key noun phrases found in the text.
See also: AWS API Documentation
Request Syntax
response = client.detect_key_phrases(
Text='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A UTF-8 text string. Each string must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'KeyPhrases': [
{
'Score': ...,
'Text': 'string',
'BeginOffset': 123,
'EndOffset': 123
},
]
}
Response Structure
(dict) --
KeyPhrases (list) --
A collection of key phrases that Amazon Comprehend identified in the input text. For each key phrase, the response provides the text of the key phrase, where the key phrase begins and ends, and the level of confidence that Amazon Comprehend has in the accuracy of the detection.
(dict) --
Describes a key noun phrase.
Score (float) --
The level of confidence that Amazon Comprehend has in the accuracy of the detection.
Text (string) --
The text of a key noun phrase.
BeginOffset (integer) --
A character offset in the input text that shows where the key phrase begins (the first character is at position 0). The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
EndOffset (integer) --
A character offset in the input text where the key phrase ends. The offset returns the position of each UTF-8 code point in the string. A code point is the abstract character from a particular graphical representation. For example, a multi-byte UTF-8 character maps to a single code point.
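For example, a minimal usage sketch (the sample sentence is illustrative):

import boto3

comprehend = boto3.client('comprehend')

response = comprehend.detect_key_phrases(
    Text='The annual developer conference will be held at the convention center downtown.',
    LanguageCode='en',
)

for phrase in response['KeyPhrases']:
    print(repr(phrase['Text']), round(phrase['Score'], 3))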
Inspects text and returns an inference of the prevailing sentiment (POSITIVE , NEUTRAL , MIXED , or NEGATIVE ).
See also: AWS API Documentation
Request Syntax
response = client.detect_sentiment(
Text='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A UTF-8 text string. Each string must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
dict
Response Syntax
{
'Sentiment': 'POSITIVE'|'NEGATIVE'|'NEUTRAL'|'MIXED',
'SentimentScore': {
'Positive': ...,
'Negative': ...,
'Neutral': ...,
'Mixed': ...
}
}
Response Structure
(dict) --
Sentiment (string) --
The inferred sentiment that Amazon Comprehend has the highest level of confidence in.
SentimentScore (dict) --
An object that lists the sentiments, and their corresponding confidence levels.
Positive (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the POSITIVE sentiment.
Negative (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEGATIVE sentiment.
Neutral (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the NEUTRAL sentiment.
Mixed (float) --
The level of confidence that Amazon Comprehend has in the accuracy of its detection of the MIXED sentiment.
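For example, a minimal usage sketch (the sample review text is illustrative):

import boto3

comprehend = boto3.client('comprehend')

response = comprehend.detect_sentiment(
    Text='The checkout process was quick, but the shipping took far too long.',
    LanguageCode='en',
)

print(response['Sentiment'])        # e.g. MIXED
print(response['SentimentScore'])   # per-sentiment confidence levels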
Inspects text for syntax and the part of speech of words in the document. For more information, see how-syntax .
See also: AWS API Documentation
Request Syntax
response = client.detect_syntax(
Text='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt'
)
[REQUIRED]
A UTF-8 string. Each string must contain fewer than 5,000 bytes of UTF-8 encoded characters.
[REQUIRED]
The language code of the input documents. You can specify English ("en") or Spanish ("es").
dict
Response Syntax
{
'SyntaxTokens': [
{
'TokenId': 123,
'Text': 'string',
'BeginOffset': 123,
'EndOffset': 123,
'PartOfSpeech': {
'Tag': 'ADJ'|'ADP'|'ADV'|'AUX'|'CONJ'|'CCONJ'|'DET'|'INTJ'|'NOUN'|'NUM'|'O'|'PART'|'PRON'|'PROPN'|'PUNCT'|'SCONJ'|'SYM'|'VERB',
'Score': ...
}
},
]
}
Response Structure
(dict) --
SyntaxTokens (list) --
A collection of syntax tokens describing the text. For each token, the response provides the text, the token type, where the text begins and ends, and the level of confidence that Amazon Comprehend has that the token is correct. For a list of token types, see how-syntax .
(dict) --
Represents a word in the input text that was recognized and assigned a part of speech. There is one syntax token record for each word in the source text.
TokenId (integer) --
A unique identifier for a token.
Text (string) --
The word that was recognized in the source text.
BeginOffset (integer) --
The zero-based offset from the beginning of the source text to the first character in the word.
EndOffset (integer) --
The zero-based offset from the beginning of the source text to the last character in the word.
PartOfSpeech (dict) --
Provides the part of speech label and the confidence level that Amazon Comprehend has that the part of speech was correctly identified. For more information, see how-syntax .
Tag (string) --
Identifies the part of speech that the token represents.
Score (float) --
The confidence that Amazon Comprehend has that the part of speech was correctly identified.
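For example, a minimal usage sketch (the sample sentence is illustrative):

import boto3

comprehend = boto3.client('comprehend')

response = comprehend.detect_syntax(
    Text='It is raining today in Seattle.',
    LanguageCode='en',
)

for token in response['SyntaxTokens']:
    print(token['Text'], token['PartOfSpeech']['Tag'])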
Generate a presigned url given a client, its method, and arguments
The presigned url
Create a paginator for an operation.
Returns an object that can wait for some condition.
Gets a list of the document classification jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
response = client.list_document_classification_jobs(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the jobs that are returned. You can filter jobs on their names, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
dict
Response Syntax
{
'DocumentClassificationJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'DocumentClassifierArn': 'string',
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'DataAccessRoleArn': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
DocumentClassificationJobPropertiesList (list) --
A list containing the properties of each job returned.
(dict) --
Provides information about a document classification job.
JobId (string) --
The identifier assigned to the document classification job.
JobName (string) --
The name that you assigned to the document classification job.
JobStatus (string) --
The current status of the document classification job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of the job.
SubmitTime (datetime) --
The time that the document classification job was submitted for processing.
EndTime (datetime) --
The time that the document classification job completed.
DocumentClassifierArn (string) --
The Amazon Resource Name (ARN) that identifies the document classifier.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the document classification job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the document classification job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
NextToken (string) --
Identifies the next page of results to return.
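A minimal sketch of paging through this operation by hand with NextToken; the JobStatus filter value and the client setup are illustrative assumptions, not requirements of the API.
import boto3

client = boto3.client('comprehend')

# List completed document classification jobs, following NextToken until it is absent.
kwargs = {'Filter': {'JobStatus': 'COMPLETED'}, 'MaxResults': 100}
while True:
    response = client.list_document_classification_jobs(**kwargs)
    for job in response['DocumentClassificationJobPropertiesList']:
        print(job['JobId'], job['JobName'], job['JobStatus'])
    if 'NextToken' not in response:
        break
    kwargs['NextToken'] = response['NextToken']
The list_document_classification_jobs paginator described later on this page performs the same token handling automatically.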
Gets a list of the document classifiers that you have created.
See also: AWS API Documentation
Request Syntax
response = client.list_document_classifiers(
Filter={
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters the list of classifiers based on status.
Filters the list of classifiers based on the time that the classifier was submitted for processing. Returns only classifiers submitted before the specified time. Classifiers are returned in ascending order, oldest to newest.
Filters the list of classifiers based on the time that the classifier was submitted for processing. Returns only classifiers submitted after the specified time. Classifiers are returned in descending order, newest to oldest.
dict
Response Syntax
{
'DocumentClassifierPropertiesList': [
{
'DocumentClassifierArn': 'string',
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'TrainingStartTime': datetime(2015, 1, 1),
'TrainingEndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string'
},
'ClassifierMetadata': {
'NumberOfLabels': 123,
'NumberOfTrainedDocuments': 123,
'NumberOfTestDocuments': 123,
'EvaluationMetrics': {
'Accuracy': 123.0,
'Precision': 123.0,
'Recall': 123.0,
'F1Score': 123.0
}
},
'DataAccessRoleArn': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
DocumentClassifierPropertiesList (list) --
A list containing the properties of each job returned.
(dict) --
Provides information about a document classifier.
DocumentClassifierArn (string) --
The Amazon Resource Name (ARN) that identifies the document classifier.
LanguageCode (string) --
The language code for the language of the documents that the classifier was trained on.
Status (string) --
The status of the document classifier. If the status is TRAINED , the classifier is ready to use. If the status is FAILED , you can see additional information about why the classifier wasn't trained in the Message field.
Message (string) --
Additional information about the status of the classifier.
SubmitTime (datetime) --
The time that the document classifier was submitted for training.
EndTime (datetime) --
The time that training the document classifier completed.
TrainingStartTime (datetime) --
Indicates the time when the training starts on document classifiers. You are billed for the time interval between this time and the value of TrainingEndTime.
TrainingEndTime (datetime) --
The time that training of the document classifier completed. You are billed for the time interval between this time and the value of TrainingStartTime.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the document classifier for training.
S3Uri (string) --
The Amazon S3 URI for the input data. The S3 bucket must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of input files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
ClassifierMetadata (dict) --
Information about the document classifier, including the number of documents used for training the classifier, the number of documents used for testing the classifier, and an accuracy rating.
NumberOfLabels (integer) --
The number of labels in the input data.
NumberOfTrainedDocuments (integer) --
The number of documents in the input data that were used to train the classifier. Typically this is 80 to 90 percent of the input documents.
NumberOfTestDocuments (integer) --
The number of documents in the input data that were used to test the classifier. Typically this is 10 to 20 percent of the input documents.
EvaluationMetrics (dict) --
Describes the result metrics for the test data associated with a document classifier.
Accuracy (float) --
The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.
Precision (float) --
A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.
Recall (float) --
A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.
F1Score (float) --
A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
NextToken (string) --
Identifies the next page of results to return.
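As a hedged illustration of reading the ClassifierMetadata block described above, the sketch below prints evaluation metrics for trained classifiers; the Status filter is an assumption.
import boto3

client = boto3.client('comprehend')

# Print evaluation metrics for each trained classifier (illustrative).
response = client.list_document_classifiers(Filter={'Status': 'TRAINED'})
for classifier in response['DocumentClassifierPropertiesList']:
    metrics = classifier.get('ClassifierMetadata', {}).get('EvaluationMetrics', {})
    print(classifier['DocumentClassifierArn'],
          'Accuracy:', metrics.get('Accuracy'),
          'F1:', metrics.get('F1Score'))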
Gets a list of the dominant language detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
response = client.list_dominant_language_detection_jobs(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
dict
Response Syntax
{
'DominantLanguageDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'DataAccessRoleArn': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
DominantLanguageDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a dominant language detection job.
JobId (string) --
The identifier assigned to the dominant language detection job.
JobName (string) --
The name that you assigned to the dominant language detection job.
JobStatus (string) --
The current status of the dominant language detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the dominant language detection job was submitted for processing.
EndTime (datetime) --
The time that the dominant language detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
NextToken (string) --
Identifies the next page of results to return.
Gets a list of the entity detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
response = client.list_entities_detection_jobs(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
dict
Response Syntax
{
'EntitiesDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'EntityRecognizerArn': 'string',
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
EntitiesDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about an entities detection job.
JobId (string) --
The identifier assigned to the entities detection job.
JobName (string) --
The name that you assigned the entities detection job.
JobStatus (string) --
The current status of the entities detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the entities detection job was submitted for processing.
EndTime (datetime) --
The time that the entities detection job completed.
EntityRecognizerArn (string) --
The Amazon Resource Name (ARN) that identifies the entity recognizer.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
NextToken (string) --
Identifies the next page of results to return.
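A small sketch of the SubmitTimeAfter filter, which takes a datetime object; the cutoff date shown is an arbitrary assumption.
from datetime import datetime

import boto3

client = boto3.client('comprehend')

# Return only entities detection jobs submitted after the given (illustrative) date.
response = client.list_entities_detection_jobs(
    Filter={'SubmitTimeAfter': datetime(2023, 1, 1)}
)
for job in response['EntitiesDetectionJobPropertiesList']:
    print(job['JobName'], job['JobStatus'], job['SubmitTime'])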
Gets a list of the properties of all entity recognizers that you created, including recognizers currently in training. Allows you to filter the list of recognizers based on criteria such as status and submission time. This call returns up to 500 entity recognizers, with a default of 100 recognizers per request.
The results are not returned in any particular order. Retrieve the list and sort it locally if needed.
See also: AWS API Documentation
Request Syntax
response = client.list_entity_recognizers(
Filter={
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the list of entities returned. You can filter on Status , SubmitTimeBefore , or SubmitTimeAfter . You can only set one filter at a time.
The status of an entity recognizer.
Filters the list of entities based on the time that the list was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
Filters the list of entities based on the time that the list was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
dict
Response Syntax
{
'EntityRecognizerPropertiesList': [
{
'EntityRecognizerArn': 'string',
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'TrainingStartTime': datetime(2015, 1, 1),
'TrainingEndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'EntityTypes': [
{
'Type': 'string'
},
],
'Documents': {
'S3Uri': 'string'
},
'Annotations': {
'S3Uri': 'string'
},
'EntityList': {
'S3Uri': 'string'
}
},
'RecognizerMetadata': {
'NumberOfTrainedDocuments': 123,
'NumberOfTestDocuments': 123,
'EvaluationMetrics': {
'Precision': 123.0,
'Recall': 123.0,
'F1Score': 123.0
},
'EntityTypes': [
{
'Type': 'string'
},
]
},
'DataAccessRoleArn': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
EntityRecognizerPropertiesList (list) --
The list of properties of an entity recognizer.
(dict) --
Describes information about an entity recognizer.
EntityRecognizerArn (string) --
The Amazon Resource Name (ARN) that identifies the entity recognizer.
LanguageCode (string) --
The language of the input documents. All documents must be in the same language. Only English ("en") is currently supported.
Status (string) --
Provides the status of the entity recognizer.
Message (string) --
A description of the status of the recognizer.
SubmitTime (datetime) --
The time that the recognizer was submitted for processing.
EndTime (datetime) --
The time that the recognizer creation completed.
TrainingStartTime (datetime) --
The time that training of the entity recognizer started.
TrainingEndTime (datetime) --
The time that training of the entity recognizer was completed.
InputDataConfig (dict) --
The input data properties of an entity recognizer.
EntityTypes (list) --
The entity types in the input data for an entity recognizer.
(dict) --
Information about an individual item on a list of entity types.
Type (string) --
Entity type of an item on an entity type list.
Documents (dict) --
S3 location of the documents folder for an entity recognizer.
S3Uri (string) --
Specifies the Amazon S3 location where the training documents for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
Annotations (dict) --
S3 location of the annotations file for an entity recognizer.
S3Uri (string) --
Specifies the Amazon S3 location where the annotations for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
EntityList (dict) --
S3 location of the entity list for an entity recognizer.
S3Uri (string) --
Specifies the Amazon S3 location where the entity list is located. The URI must be in the same region as the API endpoint that you are calling.
RecognizerMetadata (dict) --
Provides information about an entity recognizer.
NumberOfTrainedDocuments (integer) --
The number of documents in the input data that were used to train the entity recognizer. Typically this is 80 to 90 percent of the input documents.
NumberOfTestDocuments (integer) --
The number of documents in the input data that were used to test the entity recognizer. Typically this is 10 to 20 percent of the input documents.
EvaluationMetrics (dict) --
Detailed information about the accuracy of an entity recognizer.
Precision (float) --
A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.
Recall (float) --
A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
F1Score (float) --
A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
EntityTypes (list) --
Entity types from the metadata of an entity recognizer.
(dict) --
Individual item from the list of entity types in the metadata of an entity recognizer.
Type (string) --
Type of entity from the list of entity types in the metadata of an entity recognizer.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
NextToken (string) --
Identifies the next page of results to return.
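Because the recognizer list is returned in no particular order (as noted above), a minimal sketch of sorting it locally by SubmitTime; the MaxResults value is an assumption.
import boto3

client = boto3.client('comprehend')

# Fetch up to 100 recognizers and sort them locally by submission time (illustrative).
response = client.list_entity_recognizers(MaxResults=100)
recognizers = sorted(
    response['EntityRecognizerPropertiesList'],
    key=lambda r: r['SubmitTime'],
)
for recognizer in recognizers:
    print(recognizer['EntityRecognizerArn'], recognizer['Status'])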
Gets a list of key phrase detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
response = client.list_key_phrases_detection_jobs(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
dict
Response Syntax
{
'KeyPhrasesDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
KeyPhrasesDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a key phrases detection job.
JobId (string) --
The identifier assigned to the key phrases detection job.
JobName (string) --
The name that you assigned the key phrases detection job.
JobStatus (string) --
The current status of the key phrases detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the key phrases detection job was submitted for processing.
EndTime (datetime) --
The time that the key phrases detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
NextToken (string) --
Identifies the next page of results to return.
Gets a list of sentiment detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
response = client.list_sentiment_detection_jobs(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
dict
Response Syntax
{
'SentimentDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
SentimentDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a sentiment detection job.
JobId (string) --
The identifier assigned to the sentiment detection job.
JobName (string) --
The name that you assigned to the sentiment detection job.
JobStatus (string) --
The current status of the sentiment detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the sentiment detection job was submitted for processing.
EndTime (datetime) --
The time that the sentiment detection job ended.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
NextToken (string) --
Identifies the next page of results to return.
Gets a list of the topic detection jobs that you have submitted.
See also: AWS API Documentation
Request Syntax
response = client.list_topics_detection_jobs(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
NextToken='string',
MaxResults=123
)
Filters the jobs that are returned. Jobs can be filtered on their name, status, or the date and time that they were submitted. You can set only one filter at a time.
Filters the list of topic detection jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
dict
Response Syntax
{
'TopicsDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'NumberOfTopics': 123
},
],
'NextToken': 'string'
}
Response Structure
(dict) --
TopicsDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a topic detection job.
JobId (string) --
The identifier assigned to the topic detection job.
JobName (string) --
The name of the topic detection job.
JobStatus (string) --
The current status of the topic detection job. If the status is FAILED , the reason for the failure is shown in the Message field.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the topic detection job was submitted for processing.
EndTime (datetime) --
The time that the topic detection job was completed.
InputDataConfig (dict) --
The input data configuration supplied when you created the topic detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
OutputDataConfig (dict) --
The output data configuration supplied when you created the topic detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
NumberOfTopics (integer) --
The number of topics to detect supplied when you created the topic detection job. The default is 10.
NextToken (string) --
Identifies the next page of results to return.
Starts an asynchronous document classification job. Use the DescribeDocumentClassificationJob operation to track the progress of the job.
See also: AWS API Documentation
Request Syntax
response = client.start_document_classification_job(
JobName='string',
DocumentClassifierArn='string',
InputDataConfig={
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
OutputDataConfig={
'S3Uri': 'string'
},
DataAccessRoleArn='string',
ClientRequestToken='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) of the document classifier to use to process the job.
[REQUIRED]
Specifies the format and location of the input data for the job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
[REQUIRED]
Specifies where to send the output files.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of the job, use this identifier with the operation.
JobStatus (string) --
The status of the job.
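A hedged end-to-end sketch of starting a classification job; the classifier ARN, S3 URIs, and IAM role ARN are hypothetical placeholders, not real resources.
import boto3

client = boto3.client('comprehend')

# All ARNs and S3 locations below are hypothetical; substitute your own resources.
response = client.start_document_classification_job(
    JobName='my-classification-job',
    DocumentClassifierArn='arn:aws:comprehend:us-east-1:111122223333:document-classifier/my-classifier',
    InputDataConfig={
        'S3Uri': 's3://my-bucket/input/',
        'InputFormat': 'ONE_DOC_PER_LINE'
    },
    OutputDataConfig={'S3Uri': 's3://my-bucket/output/'},
    DataAccessRoleArn='arn:aws:iam::111122223333:role/ComprehendDataAccessRole'
)
print(response['JobId'], response['JobStatus'])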
Starts an asynchronous dominant language detection job for a collection of documents. Use the DescribeDominantLanguageDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
response = client.start_dominant_language_detection_job(
InputDataConfig={
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
OutputDataConfig={
'S3Uri': 'string'
},
DataAccessRoleArn='string',
JobName='string',
ClientRequestToken='string'
)
[REQUIRED]
Specifies the format and location of the input data for the job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
[REQUIRED]
Specifies where to send the output files.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data. For more information, see https://docs.aws.amazon.com/comprehend/latest/dg/access-control-managing-permissions.html#auth-role-permissions .
A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of a job, use this identifier with the operation.
JobStatus (string) --
The status of the job.
Starts an asynchronous entity detection job for a collection of documents. Use the DescribeEntitiesDetectionJob operation to track the status of a job.
This API can be used for either standard entity detection or custom entity recognition. To use it for custom entity recognition, provide the optional EntityRecognizerArn, which gives the job access to the recognizer used to detect the custom entities.
See also: AWS API Documentation
Request Syntax
response = client.start_entities_detection_job(
InputDataConfig={
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
OutputDataConfig={
'S3Uri': 'string'
},
DataAccessRoleArn='string',
JobName='string',
EntityRecognizerArn='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt',
ClientRequestToken='string'
)
[REQUIRED]
Specifies the format and location of the input data for the job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
[REQUIRED]
Specifies where to send the output files.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data. For more information, see https://docs.aws.amazon.com/comprehend/latest/dg/access-control-managing-permissions.html#auth-role-permissions .
[REQUIRED]
The language of the input documents. All documents must be in the same language. You can specify any of the languages supported by Amazon Comprehend: English ("en"), Spanish ("es"), French ("fr"), German ("de"), Italian ("it"), or Portuguese ("pt"). If custom entities recognition is used, this parameter is ignored and the language used for training the model is used instead.
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of job, use this identifier with the operation.
JobStatus (string) --
The status of the job.
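A minimal sketch of starting a custom entity recognition job by supplying EntityRecognizerArn; all ARNs and S3 locations are hypothetical, and LanguageCode is shown even though the recognizer's training language takes precedence.
import boto3

client = boto3.client('comprehend')

# Hypothetical resources; with EntityRecognizerArn set, the recognizer's training
# language is used and the LanguageCode value here is effectively ignored.
response = client.start_entities_detection_job(
    InputDataConfig={'S3Uri': 's3://my-bucket/docs/', 'InputFormat': 'ONE_DOC_PER_FILE'},
    OutputDataConfig={'S3Uri': 's3://my-bucket/entities-output/'},
    DataAccessRoleArn='arn:aws:iam::111122223333:role/ComprehendDataAccessRole',
    JobName='custom-entities-job',
    EntityRecognizerArn='arn:aws:comprehend:us-east-1:111122223333:entity-recognizer/my-recognizer',
    LanguageCode='en'
)
print(response['JobId'], response['JobStatus'])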
Starts an asynchronous key phrase detection job for a collection of documents. Use the DescribeKeyPhrasesDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
response = client.start_key_phrases_detection_job(
InputDataConfig={
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
OutputDataConfig={
'S3Uri': 'string'
},
DataAccessRoleArn='string',
JobName='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt',
ClientRequestToken='string'
)
[REQUIRED]
Specifies the format and location of the input data for the job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
[REQUIRED]
Specifies where to send the output files.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data. For more information, see https://docs.aws.amazon.com/comprehend/latest/dg/access-control-managing-permissions.html#auth-role-permissions .
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of a job, use this identifier with the operation.
JobStatus (string) --
The status of the job.
Starts an asynchronous sentiment detection job for a collection of documents. Use the DescribeSentimentDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
response = client.start_sentiment_detection_job(
InputDataConfig={
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
OutputDataConfig={
'S3Uri': 'string'
},
DataAccessRoleArn='string',
JobName='string',
LanguageCode='en'|'es'|'fr'|'de'|'it'|'pt',
ClientRequestToken='string'
)
[REQUIRED]
Specifies the format and location of the input data for the job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
[REQUIRED]
Specifies where to send the output files.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data. For more information, see https://docs.aws.amazon.com/comprehend/latest/dg/access-control-managing-permissions.html#auth-role-permissions .
[REQUIRED]
The language of the input documents. You can specify English ("en") or Spanish ("es"). All documents must be in the same language.
A unique identifier for the request. If you don't set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of a job, use this identifier with the operation.
JobStatus (string) --
The status of the job.
Starts an asynchronous topic detection job. Use the DescribeTopicDetectionJob operation to track the status of a job.
See also: AWS API Documentation
Request Syntax
response = client.start_topics_detection_job(
InputDataConfig={
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
OutputDataConfig={
'S3Uri': 'string'
},
DataAccessRoleArn='string',
JobName='string',
NumberOfTopics=123,
ClientRequestToken='string'
)
[REQUIRED]
Specifies the format and location of the input data for the job.
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
[REQUIRED]
Specifies where to send the output files. The output is a compressed archive with two files: topic-terms.csv, which lists the terms associated with each topic, and doc-topics.csv, which lists the documents associated with each topic.
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
[REQUIRED]
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data. For more information, see https://docs.aws.amazon.com/comprehend/latest/dg/access-control-managing-permissions.html#auth-role-permissions .
A unique identifier for the request. If you do not set the client request token, Amazon Comprehend generates one.
This field is autopopulated if not provided.
dict
Response Syntax
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
(dict) --
JobId (string) --
The identifier generated for the job. To get the status of the job, use this identifier with the DescribeTopicDetectionJob operation.
JobStatus (string) --
The status of the job.
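A short sketch of starting a topic detection job that requests 25 topics; the S3 URIs and role ARN are hypothetical.
import boto3

client = boto3.client('comprehend')

# Hypothetical input corpus, output prefix, and IAM role.
response = client.start_topics_detection_job(
    InputDataConfig={'S3Uri': 's3://my-bucket/corpus/', 'InputFormat': 'ONE_DOC_PER_FILE'},
    OutputDataConfig={'S3Uri': 's3://my-bucket/topics-output/'},
    DataAccessRoleArn='arn:aws:iam::111122223333:role/ComprehendDataAccessRole',
    JobName='my-topics-job',
    NumberOfTopics=25
)
print(response['JobId'], response['JobStatus'])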
Stops a dominant language detection job in progress.
If the job state is IN_PROGRESS the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopDominantLanguageDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
See also: AWS API Documentation
Request Syntax
response = client.stop_dominant_language_detection_job(
JobId='string'
)
[REQUIRED]
The identifier of the dominant language detection job to stop.
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
The identifier of the dominant language detection job to stop.
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopDominantLanguageDetectionJob operation.
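A minimal stop-and-check sketch; the JobId is a hypothetical placeholder.
import boto3

client = boto3.client('comprehend')

# Request that a running job stop; the returned status is STOP_REQUESTED if the job
# was running, or STOPPED if it had already been stopped.
response = client.stop_dominant_language_detection_job(JobId='0123456789abcdef')
print(response['JobId'], response['JobStatus'])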
Stops an entities detection job in progress.
If the job state is IN_PROGRESS the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopEntitiesDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
See also: AWS API Documentation
Request Syntax
response = client.stop_entities_detection_job(
JobId='string'
)
[REQUIRED]
The identifier of the entities detection job to stop.
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
The identifier of the entities detection job to stop.
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopEntitiesDetectionJob operation.
Stops a key phrases detection job in progress.
If the job state is IN_PROGRESS the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopKeyPhrasesDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
See also: AWS API Documentation
Request Syntax
response = client.stop_key_phrases_detection_job(
JobId='string'
)
[REQUIRED]
The identifier of the key phrases detection job to stop.
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
The identifier of the key phrases detection job to stop.
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopKeyPhrasesDetectionJob operation.
Stops a sentiment detection job in progress.
If the job state is IN_PROGRESS the job is marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state; otherwise the job is stopped and put into the STOPPED state.
If the job is in the COMPLETED or FAILED state when you call the StopSentimentDetectionJob operation, the operation returns a 400 Internal Request Exception.
When a job is stopped, any documents already processed are written to the output location.
See also: AWS API Documentation
Request Syntax
response = client.stop_sentiment_detection_job(
JobId='string'
)
[REQUIRED]
The identifier of the sentiment detection job to stop.
{
'JobId': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED'
}
Response Structure
The identifier of the sentiment detection job to stop.
Either STOP_REQUESTED if the job is currently running, or STOPPED if the job was previously stopped with the StopSentimentDetectionJob operation.
Stops a document classifier training job while in progress.
If the training job state is TRAINING , the job is marked for termination and put into the STOP_REQUESTED state. If the training job completes before it can be stopped, it is put into the TRAINED state; otherwise the training job is stopped and put into the STOPPED state and the service sends back an HTTP 200 response with an empty HTTP body.
See also: AWS API Documentation
Request Syntax
response = client.stop_training_document_classifier(
DocumentClassifierArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) that identifies the document classifier currently being trained.
{}
Response Structure
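A brief sketch of stopping classifier training; the classifier ARN is hypothetical, and a successful call returns an empty dictionary.
import boto3

client = boto3.client('comprehend')

# Hypothetical classifier ARN; a successful call returns {}.
client.stop_training_document_classifier(
    DocumentClassifierArn='arn:aws:comprehend:us-east-1:111122223333:document-classifier/my-classifier'
)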
Stops an entity recognizer training job while in progress.
If the training job state is TRAINING , the job is marked for termination and put into the STOP_REQUESTED state. If the training job completes before it can be stopped, it is put into the TRAINED state; otherwise the training job is stopped and put into the STOPPED state and the service sends back an HTTP 200 response with an empty HTTP body.
See also: AWS API Documentation
Request Syntax
response = client.stop_training_entity_recognizer(
EntityRecognizerArn='string'
)
[REQUIRED]
The Amazon Resource Name (ARN) that identifies the entity recognizer currently being trained.
{}
Response Structure
The available paginators are:
paginator = client.get_paginator('list_document_classification_jobs')
Creates an iterator that will paginate through responses from Comprehend.Client.list_document_classification_jobs().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the jobs that are returned. You can filter jobs on their names, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'DocumentClassificationJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'DocumentClassifierArn': 'string',
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'DataAccessRoleArn': 'string'
},
],
}
Response Structure
(dict) --
DocumentClassificationJobPropertiesList (list) --
A list containing the properties of each job returned.
(dict) --
Provides information about a document classification job.
JobId (string) --
The identifier assigned to the document classification job.
JobName (string) --
The name that you assigned to the document classification job.
JobStatus (string) --
The current status of the document classification job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of the job.
SubmitTime (datetime) --
The time that the document classification job was submitted for processing.
EndTime (datetime) --
The time that the document classification job completed.
DocumentClassifierArn (string) --
The Amazon Resource Name (ARN) that identifies the document classifier.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the document classification job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, with the URI S3://bucketName/prefix: if the prefix is a single file, Amazon Comprehend uses that file as input; if more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed: ONE_DOC_PER_FILE treats each file as a single document, and ONE_DOC_PER_LINE treats each line in a file as a separate document.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the document classification job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
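For example, a minimal sketch of iterating this paginator (assuming the list_document_classification_jobs paginator described above; the status filter is illustrative, not required):
import boto3

client = boto3.client('comprehend')
paginator = client.get_paginator('list_document_classification_jobs')

# Each page is a response dict shaped like the Response Syntax above.
for page in paginator.paginate(Filter={'JobStatus': 'COMPLETED'}):
    for job in page['DocumentClassificationJobPropertiesList']:
        print(job['JobId'], job['JobStatus'], job['OutputDataConfig']['S3Uri'])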
paginator = client.get_paginator('list_document_classifiers')
Creates an iterator that will paginate through responses from Comprehend.Client.list_document_classifiers().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the classifiers that are returned. You can filter on status or the date and time that the classifier was submitted. You can only set one filter at a time.
Filters the list of classifiers based on status.
Filters the list of classifiers based on the time that the classifier was submitted for processing. Returns only classifiers submitted before the specified time. Classifiers are returned in ascending order, oldest to newest.
Filters the list of classifiers based on the time that the classifier was submitted for processing. Returns only classifiers submitted after the specified time. Classifiers are returned in descending order, newest to oldest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'DocumentClassifierPropertiesList': [
{
'DocumentClassifierArn': 'string',
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'TrainingStartTime': datetime(2015, 1, 1),
'TrainingEndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string'
},
'ClassifierMetadata': {
'NumberOfLabels': 123,
'NumberOfTrainedDocuments': 123,
'NumberOfTestDocuments': 123,
'EvaluationMetrics': {
'Accuracy': 123.0,
'Precision': 123.0,
'Recall': 123.0,
'F1Score': 123.0
}
},
'DataAccessRoleArn': 'string'
},
],
}
Response Structure
(dict) --
DocumentClassifierPropertiesList (list) --
A list containing the properties of each job returned.
(dict) --
Provides information about a document classifier.
DocumentClassifierArn (string) --
The Amazon Resource Name (ARN) that identifies the document classifier.
LanguageCode (string) --
The language code for the language of the documents that the classifier was trained on.
Status (string) --
The status of the document classifier. If the status is TRAINED , the classifier is ready to use. If the status is FAILED , you can see additional information about why the classifier wasn't trained in the Message field.
Message (string) --
Additional information about the status of the classifier.
SubmitTime (datetime) --
The time that the document classifier was submitted for training.
EndTime (datetime) --
The time that training the document classifier completed.
TrainingStartTime (datetime) --
The time that training of the document classifier started. You are billed for the time interval between this time and the value of TrainingEndTime.
TrainingEndTime (datetime) --
The time that training of the document classifier completed. You are billed for the time interval between the value of TrainingStartTime and this time.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the document classifier for training.
S3Uri (string) --
The Amazon S3 URI for the input data. The S3 bucket must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of input files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
ClassifierMetadata (dict) --
Information about the document classifier, including the number of documents used for training the classifier, the number of documents used to test the classifier, and an accuracy rating.
NumberOfLabels (integer) --
The number of labels in the input data.
NumberOfTrainedDocuments (integer) --
The number of documents in the input data that were used to train the classifier. Typically this is 80 to 90 percent of the input documents.
NumberOfTestDocuments (integer) --
The number of documents in the input data that were used to test the classifier. Typically this is 10 to 20 percent of the input documents.
EvaluationMetrics (dict) --
Describes the result metrics for the test data associated with a document classifier.
Accuracy (float) --
The fraction of the labels that were correctly recognized. It is computed by dividing the number of labels in the test documents that were correctly recognized by the total number of labels in the test documents.
Precision (float) --
A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.
Recall (float) --
A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.
F1Score (float) --
A measure of how accurate the classifier results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
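For example, a minimal sketch that lists trained classifiers and prints their evaluation metrics (the status filter is illustrative):
import boto3

client = boto3.client('comprehend')
paginator = client.get_paginator('list_document_classifiers')

# Report the F1 score for each classifier that has finished training.
for page in paginator.paginate(Filter={'Status': 'TRAINED'}):
    for classifier in page['DocumentClassifierPropertiesList']:
        metrics = classifier.get('ClassifierMetadata', {}).get('EvaluationMetrics', {})
        print(classifier['DocumentClassifierArn'], metrics.get('F1Score'))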
paginator = client.get_paginator('list_dominant_language_detection_jobs')
Creates an iterator that will paginate through responses from Comprehend.Client.list_dominant_language_detection_jobs().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'DominantLanguageDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'DataAccessRoleArn': 'string'
},
],
}
Response Structure
(dict) --
DominantLanguageDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a dominant language detection job.
JobId (string) --
The identifier assigned to the dominant language detection job.
JobName (string) --
The name that you assigned to the dominant language detection job.
JobStatus (string) --
The current status of the dominant language detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the dominant language detection job was submitted for processing.
EndTime (datetime) --
The time that the dominant language detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the dominant language detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
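For example, a minimal sketch that pages through dominant language detection jobs submitted within the last week (the time window is illustrative):
import boto3
from datetime import datetime, timedelta, timezone

client = boto3.client('comprehend')
paginator = client.get_paginator('list_dominant_language_detection_jobs')

# Only jobs submitted after the cutoff are returned; each page is a normal response dict.
cutoff = datetime.now(timezone.utc) - timedelta(days=7)
for page in paginator.paginate(Filter={'SubmitTimeAfter': cutoff}):
    for job in page['DominantLanguageDetectionJobPropertiesList']:
        print(job.get('JobName'), job['JobStatus'])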
paginator = client.get_paginator('list_entities_detection_jobs')
Creates an iterator that will paginate through responses from Comprehend.Client.list_entities_detection_jobs().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'EntitiesDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'EntityRecognizerArn': 'string',
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
},
],
}
Response Structure
(dict) --
EntitiesDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about an entities detection job.
JobId (string) --
The identifier assigned to the entities detection job.
JobName (string) --
The name that you assigned the entities detection job.
JobStatus (string) --
The current status of the entities detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the entities detection job was submitted for processing.
EndTime (datetime) --
The time that the entities detection job completed.
EntityRecognizerArn (string) --
The Amazon Resource Name (ARN) that identifies the entity recognizer.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the entities detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
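For example, a minimal sketch that caps the result set with PaginationConfig while filtering on a job name (the name, MaxItems, and PageSize values are illustrative):
import boto3

client = boto3.client('comprehend')
paginator = client.get_paginator('list_entities_detection_jobs')

# Return at most 50 jobs, fetched 10 per underlying API call, matching the given job name.
pages = paginator.paginate(
    Filter={'JobName': 'my-entities-job'},
    PaginationConfig={'MaxItems': 50, 'PageSize': 10}
)
for page in pages:
    for job in page['EntitiesDetectionJobPropertiesList']:
        print(job['JobId'], job.get('JobName'), job.get('LanguageCode'))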
paginator = client.get_paginator('list_entity_recognizers')
Creates an iterator that will paginate through responses from Comprehend.Client.list_entity_recognizers().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the list of entity recognizers that are returned. You can filter on Status , SubmitTimeBefore , or SubmitTimeAfter . You can only set one filter at a time.
The status of an entity recognizer.
Filters the list of entity recognizers based on the time that the recognizer was submitted for processing. Returns only recognizers submitted before the specified time. Recognizers are returned in descending order, newest to oldest.
Filters the list of entity recognizers based on the time that the recognizer was submitted for processing. Returns only recognizers submitted after the specified time. Recognizers are returned in ascending order, oldest to newest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'EntityRecognizerPropertiesList': [
{
'EntityRecognizerArn': 'string',
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'Status': 'SUBMITTED'|'TRAINING'|'DELETING'|'STOP_REQUESTED'|'STOPPED'|'IN_ERROR'|'TRAINED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'TrainingStartTime': datetime(2015, 1, 1),
'TrainingEndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'EntityTypes': [
{
'Type': 'string'
},
],
'Documents': {
'S3Uri': 'string'
},
'Annotations': {
'S3Uri': 'string'
},
'EntityList': {
'S3Uri': 'string'
}
},
'RecognizerMetadata': {
'NumberOfTrainedDocuments': 123,
'NumberOfTestDocuments': 123,
'EvaluationMetrics': {
'Precision': 123.0,
'Recall': 123.0,
'F1Score': 123.0
},
'EntityTypes': [
{
'Type': 'string'
},
]
},
'DataAccessRoleArn': 'string'
},
],
}
Response Structure
(dict) --
EntityRecognizerPropertiesList (list) --
The list of properties of an entity recognizer.
(dict) --
Describes information about an entity recognizer.
EntityRecognizerArn (string) --
The Amazon Resource Name (ARN) that identifies the entity recognizer.
LanguageCode (string) --
The language of the input documents. All documents must be in the same language. Only English ("en") is currently supported.
Status (string) --
Provides the status of the entity recognizer.
Message (string) --
A description of the status of the recognizer.
SubmitTime (datetime) --
The time that the recognizer was submitted for processing.
EndTime (datetime) --
The time that the recognizer creation completed.
TrainingStartTime (datetime) --
The time that training of the entity recognizer started.
TrainingEndTime (datetime) --
The time that training of the entity recognizer was completed.
InputDataConfig (dict) --
The input data properties of an entity recognizer.
EntityTypes (list) --
The entity types in the input data for an entity recognizer.
(dict) --
Information about an individual item on a list of entity types.
Type (string) --
Entity type of an item on an entity type list.
Documents (dict) --
S3 location of the documents folder for an entity recognizer.
S3Uri (string) --
Specifies the Amazon S3 location where the training documents for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
Annotations (dict) --
S3 location of the annotations file for an entity recognizer.
S3Uri (string) --
Specifies the Amazon S3 location where the annotations for an entity recognizer are located. The URI must be in the same region as the API endpoint that you are calling.
EntityList (dict) --
S3 location of the entity list for an entity recognizer.
S3Uri (string) --
Specifies the Amazon S3 location where the entity list is located. The URI must be in the same region as the API endpoint that you are calling.
RecognizerMetadata (dict) --
Provides information about an entity recognizer.
NumberOfTrainedDocuments (integer) --
The number of documents in the input data that were used to train the entity recognizer. Typically this is 80 to 90 percent of the input documents.
NumberOfTestDocuments (integer) --
The number of documents in the input data that were used to test the entity recognizer. Typically this is 10 to 20 percent of the input documents.
EvaluationMetrics (dict) --
Detailed information about the accuracy of an entity recognizer.
Precision (float) --
A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.
Recall (float) --
A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
F1Score (float) --
A measure of how accurate the recognizer results are for the test data. It is derived from the Precision and Recall values. The F1Score is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
EntityTypes (list) --
Entity types from the metadata of an entity recognizer.
(dict) --
Individual item from the list of entity types in the metadata of an entity recognizer.
Type (string) --
Type of entity from the list of entity types in the metadata of an entity recognizer.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend read access to your input data.
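For example, a minimal sketch that walks all entity recognizers and prints the entity types each one was trained on:
import boto3

client = boto3.client('comprehend')
paginator = client.get_paginator('list_entity_recognizers')

# RecognizerMetadata and its EntityTypes list may be absent while a recognizer is still training.
for page in paginator.paginate():
    for recognizer in page['EntityRecognizerPropertiesList']:
        types = [t['Type'] for t in recognizer.get('RecognizerMetadata', {}).get('EntityTypes', [])]
        print(recognizer['EntityRecognizerArn'], recognizer['Status'], types)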
paginator = client.get_paginator('list_key_phrases_detection_jobs')
Creates an iterator that will paginate through responses from Comprehend.Client.list_key_phrases_detection_jobs().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'KeyPhrasesDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
},
],
}
Response Structure
(dict) --
KeyPhrasesDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a key phrases detection job.
JobId (string) --
The identifier assigned to the key phrases detection job.
JobName (string) --
The name that you assigned the key phrases detection job.
JobStatus (string) --
The current status of the key phrases detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the key phrases detection job was submitted for processing.
EndTime (datetime) --
The time that the key phrases detection job completed.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the key phrases detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
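For example, a minimal sketch that uses build_full_result() to merge every page into a single response-shaped dictionary instead of looping over pages (the status filter is illustrative):
import boto3

client = boto3.client('comprehend')
paginator = client.get_paginator('list_key_phrases_detection_jobs')

# build_full_result() aggregates all pages; the merged list keeps the per-page key name.
result = paginator.paginate(Filter={'JobStatus': 'IN_PROGRESS'}).build_full_result()
for job in result.get('KeyPhrasesDetectionJobPropertiesList', []):
    print(job['JobId'], job['SubmitTime'])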
paginator = client.get_paginator('list_sentiment_detection_jobs')
Creates an iterator that will paginate through responses from Comprehend.Client.list_sentiment_detection_jobs().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the jobs that are returned. You can filter jobs on their name, status, or the date and time that they were submitted. You can only set one filter at a time.
Filters on the name of the job.
Filters the list of jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
Filters the list of jobs based on the time that the job was submitted for processing. Returns only jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'SentimentDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'LanguageCode': 'en'|'es'|'fr'|'de'|'it'|'pt',
'DataAccessRoleArn': 'string'
},
],
}
Response Structure
(dict) --
SentimentDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a sentiment detection job.
JobId (string) --
The identifier assigned to the sentiment detection job.
JobName (string) --
The name that you assigned to the sentiment detection job.
JobStatus (string) --
The current status of the sentiment detection job. If the status is FAILED , the Message field shows the reason for the failure.
Message (string) --
A description of the status of a job.
SubmitTime (datetime) --
The time that the sentiment detection job was submitted for processing.
EndTime (datetime) --
The time that the sentiment detection job ended.
InputDataConfig (dict) --
The input data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration that you supplied when you created the sentiment detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
LanguageCode (string) --
The language code of the input documents.
DataAccessRoleArn (string) --
The Amazon Resource Name (ARN) that gives Amazon Comprehend read access to your input data.
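For example, a minimal sketch that resumes pagination from a previously saved token (saved_token is a hypothetical value that would normally come from the NextToken of an earlier, truncated run):
import boto3

client = boto3.client('comprehend')
paginator = client.get_paginator('list_sentiment_detection_jobs')

# StartingToken picks up where a prior paginated call left off; MaxItems bounds this run.
saved_token = None  # replace with a NextToken captured from an earlier response
config = {'MaxItems': 25}
if saved_token:
    config['StartingToken'] = saved_token
for page in paginator.paginate(PaginationConfig=config):
    for job in page['SentimentDetectionJobPropertiesList']:
        print(job.get('JobName'), job['JobStatus'], job.get('EndTime'))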
paginator = client.get_paginator('list_topics_detection_jobs')
Creates an iterator that will paginate through responses from Comprehend.Client.list_topics_detection_jobs().
See also: AWS API Documentation
Request Syntax
response_iterator = paginator.paginate(
Filter={
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'SubmitTimeBefore': datetime(2015, 1, 1),
'SubmitTimeAfter': datetime(2015, 1, 1)
},
PaginationConfig={
'MaxItems': 123,
'PageSize': 123,
'StartingToken': 'string'
}
)
Filters the jobs that are returned. Jobs can be filtered on their name, status, or the date and time that they were submitted. You can set only one filter at a time.
Filters the list of topic detection jobs based on job status. Returns only jobs with the specified status.
Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted before the specified time. Jobs are returned in descending order, newest to oldest.
Filters the list of jobs based on the time that the job was submitted for processing. Only returns jobs submitted after the specified time. Jobs are returned in ascending order, oldest to newest.
A dictionary that provides parameters to control pagination.
The total number of items to return. If the total number of items available is more than the value specified in max-items then a NextToken will be provided in the output that you can use to resume pagination.
The size of each page.
A token to specify where to start paginating. This is the NextToken from a previous response.
dict
Response Syntax
{
'TopicsDetectionJobPropertiesList': [
{
'JobId': 'string',
'JobName': 'string',
'JobStatus': 'SUBMITTED'|'IN_PROGRESS'|'COMPLETED'|'FAILED'|'STOP_REQUESTED'|'STOPPED',
'Message': 'string',
'SubmitTime': datetime(2015, 1, 1),
'EndTime': datetime(2015, 1, 1),
'InputDataConfig': {
'S3Uri': 'string',
'InputFormat': 'ONE_DOC_PER_FILE'|'ONE_DOC_PER_LINE'
},
'OutputDataConfig': {
'S3Uri': 'string'
},
'NumberOfTopics': 123
},
],
}
Response Structure
(dict) --
TopicsDetectionJobPropertiesList (list) --
A list containing the properties of each job that is returned.
(dict) --
Provides information about a topic detection job.
JobId (string) --
The identifier assigned to the topic detection job.
JobName (string) --
The name of the topic detection job.
JobStatus (string) --
The current status of the topic detection job. If the status is FAILED , the reason for the failure is shown in the Message field.
Message (string) --
A description for the status of a job.
SubmitTime (datetime) --
The time that the topic detection job was submitted for processing.
EndTime (datetime) --
The time that the topic detection job was completed.
InputDataConfig (dict) --
The input data configuration supplied when you created the topic detection job.
S3Uri (string) --
The Amazon S3 URI for the input data. The URI must be in the same region as the API endpoint that you are calling. The URI can point to a single input file or it can provide the prefix for a collection of data files.
For example, if you use the URI S3://bucketName/prefix and the prefix is a single file, Amazon Comprehend uses that file as input. If more than one file begins with the prefix, Amazon Comprehend uses all of them as input.
InputFormat (string) --
Specifies how the text in an input file should be processed:
ONE_DOC_PER_FILE - Each file is considered a separate document. Use this option when you are processing large documents, such as newspaper articles or scientific papers.
ONE_DOC_PER_LINE - Each line in a file is considered a separate document. Use this option when you are processing many short documents, such as text messages.
OutputDataConfig (dict) --
The output data configuration supplied when you created the topic detection job.
S3Uri (string) --
When you use the OutputDataConfig object with asynchronous operations, you specify the Amazon S3 location where you want to write the output data. The URI must be in the same region as the API endpoint that you are calling. The location is used as the prefix for the actual location of the output file.
When the topic detection job is finished, the service creates an output file in a directory specific to the job. The S3Uri field contains the location of the output file, called output.tar.gz . It is a compressed archive that contains the output of the operation.
NumberOfTopics (integer) --
The number of topics to detect supplied when you created the topic detection job. The default is 10.
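For example, a minimal sketch that lists completed topic detection jobs and prints where each job wrote its output archive (the status filter is illustrative):
import boto3

client = boto3.client('comprehend')
paginator = client.get_paginator('list_topics_detection_jobs')

# Each completed job exposes the S3 prefix that contains its output.tar.gz archive.
for page in paginator.paginate(Filter={'JobStatus': 'COMPLETED'}):
    for job in page['TopicsDetectionJobPropertiesList']:
        print(job['JobId'], job.get('NumberOfTopics'), job['OutputDataConfig']['S3Uri'])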