This output lets you index and store your logs in Solr. If you want to get started quickly, use Solr version 4.4 or above in schemaless mode, which will try to guess your fields automatically. To turn that on, you can use the example included in the Solr archive:
tar zxf solr-4.4.0.tgz
cd solr-4.4.0/example #the archive extracts into a solr-4.4.0 directory
mv solr solr_ #back up the existing sample conf
cp -r example-schemaless/solr/ . #put the schemaless conf in place
java -jar start.jar #start Solr
You can learn more about Solr at https://lucene.apache.org/solr/
output {
  solr_http {
    codec => ... # codec (optional), default: "plain"
    document_id => ... # string (optional), default: nil
    flush_size => ... # number (optional), default: 100
    idle_flush_time => ... # number (optional), default: 1
    solr_url => ... # string (optional), default: "http://localhost:8983/solr"
    workers => ... # number (optional), default: 1
  }
}
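Since every option above is optional and has a default, the smallest working configuration (assuming Solr is reachable at the default URL, http://localhost:8983/solr) is simply:

```
output {
  solr_http { }
}
```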
codec
The codec used for output data. Output codecs are a convenient method for encoding your data before it leaves the output, without needing a separate filter in your Logstash pipeline.

document_id
Solr document ID for events. You would typically use a field reference here, like "%{foo}", so you can assign your own IDs.
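For example, assuming your events carry a field holding a unique identifier (the field name uuid below is hypothetical), you could use it as the Solr document ID:

```
output {
  solr_http {
    document_id => "%{uuid}"
  }
}
```

If document_id is left at its default of nil, Solr assigns IDs itself.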
exclude_tags
Only handle events without any of these tags. Note this check is in addition to type and tags.
flush_size
Number of events to queue up before writing to Solr.

idle_flush_time
Amount of time, in seconds, since the last flush before a flush is forced, even if the number of buffered events is smaller than flush_size.

solr_url
URL used to connect to Solr.
tags
Only handle events with all of these tags. Note that if you specify a type, the event must also match that type. Optional.

type
The type to act on. If a type is given, then this output will only act on messages with the same type. See any input plugin's "type" attribute for more. Optional.
workers
The number of workers to use for this output. Note that this setting may not be useful for all outputs.
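Putting the options together, here is a sketch of a full pipeline that reads events from stdin and writes them to a local Solr in batches. The field name fingerprint and the specific values shown are illustrative assumptions, not requirements:

```
input {
  stdin { }
}
output {
  solr_http {
    solr_url => "http://localhost:8983/solr"
    document_id => "%{fingerprint}" # hypothetical field carrying a unique ID
    flush_size => 500               # buffer up to 500 events per write
    idle_flush_time => 5            # but flush at least every 5 seconds
  }
}
```

Raising flush_size trades latency for throughput; idle_flush_time bounds how long a partially filled buffer can sit before it is written out.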