Pull events from an Amazon Web Services Simple Queue Service (SQS) queue.
SQS is a simple, scalable queue system that is part of the Amazon Web Services suite of tools.
Although SQS is similar to other queuing systems such as AMQP, it uses a custom API and requires that you have an AWS account. See http://aws.amazon.com/sqs/ for more details on how SQS works, what the pricing schedule looks like, and how to set up a queue.
To use this plugin, you must have an AWS account, set up an SQS queue, and create an identity that has access to consume messages from the queue.
The “consumer” identity must have permission to receive and delete messages on the queue. Typically, you should set up an IAM policy, create a user, and apply the IAM policy to the user. A sample policy is as follows:
{
  "Statement": [
    {
      "Action": [
        "sqs:ChangeMessageVisibility",
        "sqs:ChangeMessageVisibilityBatch",
        "sqs:DeleteMessage",
        "sqs:DeleteMessageBatch",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl",
        "sqs:ListQueues",
        "sqs:ReceiveMessage"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:sqs:us-east-1:123456789012:Logstash"
      ]
    }
  ]
}
See http://aws.amazon.com/iam/ for more details on setting up AWS identities.
input {
  sqs {
    access_key_id => ... # string (optional)
    add_field => ... # hash (optional), default: {}
    aws_credentials_file => ... # string (optional)
    codec => ... # codec (optional), default: "plain"
    id_field => ... # string (optional)
    md5_field => ... # string (optional)
    proxy_uri => ... # string (optional)
    queue => ... # string (required)
    region => ... # string, one of ["us-east-1", "us-west-1", "us-west-2", "eu-west-1", "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1", "us-gov-west-1"] (optional), default: "us-east-1"
    secret_access_key => ... # string (optional)
    sent_timestamp_field => ... # string (optional)
    tags => ... # array (optional)
    threads => ... # number (optional), default: 1
    type => ... # string (optional)
    use_ssl => ... # boolean (optional), default: true
  }
}
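For example, a minimal configuration needs only the queue name, with credentials supplied by one of the mechanisms described below (the queue name here is illustrative):

```
input {
  sqs {
    queue => "my-logstash-queue"
    region => "us-east-1"
  }
}
```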
access_key_id

This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order:

1. Static configuration, using access_key_id and secret_access_key params in the Logstash plugin config
2. External credentials file specified by aws_credentials_file
3. Environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
4. Environment variables AMAZON_ACCESS_KEY_ID and AMAZON_SECRET_ACCESS_KEY
5. IAM Instance Profile (available when running inside EC2)
add_field

Add a field to an event.
aws_credentials_file

Path to a YAML file containing a hash of AWS credentials. This file will only be loaded if access_key_id and secret_access_key aren’t set. The contents of the file should look like this:

:access_key_id: "12345"
:secret_access_key: "54321"
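A configuration that loads credentials from such a file might look like this (the file path is illustrative):

```
input {
  sqs {
    queue => "my-logstash-queue"
    aws_credentials_file => "/etc/logstash/aws_credentials.yml"
  }
}
```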
charset

The character encoding used in this input. Examples include “UTF-8” and “cp1252”. This setting is useful if your log files are in Latin-1 (aka cp1252) or in a character set other than UTF-8. This only affects “plain” format logs, since JSON is already UTF-8.
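For example, assuming the codec-based configuration style, Latin-1 input could be decoded by setting the charset on the plain codec:

```
input {
  sqs {
    queue => "my-logstash-queue"
    codec => plain { charset => "CP1252" }
  }
}
```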
codec

The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.
format

The format of input data (plain, json, json_event).
id_field

Name of the event field in which to store the SQS message ID.
md5_field

Name of the event field in which to store the SQS message MD5 checksum.
message_format

If format is “json”, an event sprintf string to build what the display @message should be given (defaults to the raw JSON). sprintf format strings look like %{fieldname}.

If format is “json_event”, ALL fields except for @type are expected to be present; not receiving all fields will cause unexpected results.
proxy_uri

URI of a proxy server, if required.
queue

Name of the SQS queue to pull messages from. Note that this is just the name of the queue, not the URL or ARN.
region

The AWS Region.
secret_access_key

The AWS Secret Access Key.
sent_timestamp_field

Name of the event field in which to store the SQS message Sent Timestamp.
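For example, the SQS message metadata can be copied into event fields like so (the field names are illustrative):

```
input {
  sqs {
    queue => "my-logstash-queue"
    id_field => "sqs_message_id"
    md5_field => "sqs_message_md5"
    sent_timestamp_field => "sqs_sent_timestamp"
  }
}
```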
tags

Add any number of arbitrary tags to your event. This can help with processing later.
threads

Set this to the number of threads you want this input to spawn. This is the same as declaring the input multiple times.
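For example, to poll the queue with four consumer threads:

```
input {
  sqs {
    queue => "my-logstash-queue"
    threads => 4
  }
}
```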
type

Add a ‘type’ field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can also use the type to search for it in the web interface.

If you try to set a type on an event that already has one (for example, when you send an event from a shipper to an indexer), the new input will not override the existing type. A type set at the shipper stays with that event for its life, even when sent to another Logstash server.
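For example, a type set on this input can be used to apply filters only to its events (assuming a Logstash version that supports conditionals; the type value is illustrative):

```
input {
  sqs {
    queue => "my-logstash-queue"
    type => "sqs"
  }
}

filter {
  if [type] == "sqs" {
    # filters here run only on events from the SQS input
  }
}
```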
use_ssl

Whether to require (true) or disable (false) SSL when communicating with the AWS API. The AWS SDK for Ruby defaults to SSL, so we preserve that default.