This output lets you store logs in Elasticsearch. It is similar to the ‘elasticsearch’ output, but improves performance by using a queue server, RabbitMQ, to deliver data to Elasticsearch.
Upon startup, this output automatically contacts the Elasticsearch cluster and configures it to read from the queue to which we write.
You can learn more about Elasticsearch at http://elasticsearch.org. More about the Elasticsearch RabbitMQ river plugin: https://github.com/elasticsearch/elasticsearch-river-rabbitmq/blob/master/README.md
output {
  elasticsearch_river {
    codec => ... # codec (optional), default: "plain"
    document_id => ... # string (optional), default: nil
    durable => ... # boolean (optional), default: true
    es_bulk_size => ... # number (optional), default: 1000
    es_bulk_timeout_ms => ... # number (optional), default: 100
    es_host => ... # string (required)
    es_ordered => ... # boolean (optional), default: false
    es_port => ... # number (optional), default: 9200
    exchange => ... # string (optional), default: "elasticsearch"
    exchange_type => ... # string, one of ["fanout", "direct", "topic"] (optional), default: "direct"
    index => ... # string (optional), default: "logstash-%{+YYYY.MM.dd}"
    index_type => ... # string (optional), default: "%{type}"
    key => ... # string (optional), default: "elasticsearch"
    password => ... # string (optional), default: "guest"
    persistent => ... # boolean (optional), default: true
    queue => ... # string (optional), default: "elasticsearch"
    rabbitmq_host => ... # string (required)
    rabbitmq_port => ... # number (optional), default: 5672
    user => ... # string (optional), default: "guest"
    vhost => ... # string (optional), default: "/"
    workers => ... # number (optional), default: 1
  }
}
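For illustration, a minimal configuration sketch using only the two required settings; the hostnames are placeholders, not values from this documentation:

```
output {
  elasticsearch_river {
    es_host => "es.example.com"        # required: Elasticsearch host used for river creation (placeholder)
    rabbitmq_host => "mq.example.com"  # required: RabbitMQ broker that events are published to (placeholder)
  }
}
```

All other settings fall back to the defaults shown in the synopsis above.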
codec: The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output, without needing a separate filter in your Logstash pipeline.
document_id: The document ID for the index. Useful for overwriting existing entries in Elasticsearch with the same ID.
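For example, setting document_id from an event field makes re-indexing idempotent; this sketch assumes each event carries a unique `uuid` field (a hypothetical field name) and uses placeholder hostnames:

```
output {
  elasticsearch_river {
    es_host => "es.example.com"        # placeholder
    rabbitmq_host => "mq.example.com"  # placeholder
    # An event re-sent with the same uuid overwrites the existing document
    # instead of creating a duplicate.
    document_id => "%{uuid}"
  }
}
```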
durable: RabbitMQ durability setting. Also applied to the corresponding Elasticsearch river setting.
es_bulk_size: Elasticsearch river configuration: bulk fetch size.
es_bulk_timeout_ms: Elasticsearch river configuration: bulk timeout in milliseconds.
es_host: The name or address of the Elasticsearch host to use for river creation.
es_ordered: Elasticsearch river configuration: whether bulk indexing is ordered.
es_port: Elasticsearch API port.
exchange: RabbitMQ exchange name.
exchange_type: The exchange type (fanout, topic, direct).
exclude_tags: Only handle events without any of these tags. Note this check is additional to type and tags.
index: The index to write events to. This can be dynamic, using the %{foo} syntax. The default value partitions your indices by day so you can more easily delete old data or search only specific date ranges.
index_type: The index type to write events to. Generally you should write only similar events to the same ‘type’. String expansion ‘%{foo}’ works here.
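As a sketch of the %{foo} expansion, the following partitions indices by both event type and day; it assumes events have a `type` field set by the input, and the hostnames are placeholders:

```
output {
  elasticsearch_river {
    es_host => "es.example.com"        # placeholder
    rabbitmq_host => "mq.example.com"  # placeholder
    # One index per type per day, e.g. an event with type "apache"
    # is written to an index such as "apache-2014.01.31".
    index => "%{type}-%{+YYYY.MM.dd}"
    index_type => "%{type}"
  }
}
```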
key: RabbitMQ routing key.
password: RabbitMQ password.
persistent: RabbitMQ persistence setting.
queue: RabbitMQ queue name.
rabbitmq_host: Hostname of the RabbitMQ server.
rabbitmq_port: Port of the RabbitMQ server.
tags: Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.
type: The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin’s ‘type’ attribute for more. Optional.
user: RabbitMQ user.
vhost: RabbitMQ vhost.
workers: The number of workers to use for this output. Note that this setting may not be useful for all outputs.