Stream events from files.
By default, each event is assumed to be one line. If you would like to join multiple log lines into one event, you’ll want to use the multiline codec.
Files are followed in a manner similar to “tail -0F”. File rotation is detected and handled by this input.
input {
  file {
    add_field => ... # hash (optional), default: {}
    codec => ... # codec (optional), default: "plain"
    discover_interval => ... # number (optional), default: 15
    exclude => ... # array (optional)
    path => ... # array (required)
    sincedb_path => ... # string (optional)
    sincedb_write_interval => ... # number (optional), default: 15
    start_position => ... # string, one of ["beginning", "end"] (optional), default: "end"
    stat_interval => ... # number (optional), default: 1
    tags => ... # array (optional)
    type => ... # string (optional)
  }
}
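A minimal, concrete configuration using this input might look like the following sketch; the path and type values are illustrative assumptions, not taken from this document:

  input {
    file {
      path => "/var/log/syslog"   # illustrative path
      type => "syslog"
    }
  }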
add_field
Add a field to an event.
charset
The character encoding used in this input. Examples include “UTF-8” and “cp1252”. This setting is useful if your log files are in Latin-1 (aka cp1252) or in another character set other than UTF-8. This only affects “plain” format logs, since JSON is already UTF-8.
codec
The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the pipeline, without needing a separate filter in your Logstash configuration.
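For example, to join indented continuation lines (such as stack traces) into the preceding event, you might use the multiline codec mentioned above; the path and pattern shown are illustrative assumptions:

  input {
    file {
      path => "/var/log/app.log"   # illustrative path
      codec => multiline {
        pattern => "^\s"           # lines starting with whitespace...
        what => "previous"         # ...are joined to the previous line's event
      }
    }
  }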
discover_interval
How often we expand globs to discover new files to watch.
exclude
Exclusions (matched against the filename, not the full path). Globs are valid here, too. For example, if you have

  path => "/var/log/*"

you might want to exclude gzipped files:

  exclude => "*.gz"
format (deprecated)
The format of input data (plain, json, json_event).
message_format (deprecated)
If format is “json”, an event sprintf string to build what the display @message should be given (defaults to the raw JSON). sprintf format strings look like %{fieldname}.
If format is “json_event”, ALL fields except for @type are expected to be present. Not receiving all fields will cause unexpected results.
TODO(sissel): This should switch to use the ‘line’ codec by default once file following
path
The path(s) to the file(s) to use as an input. You can use globs here, such as /var/log/*.log. Paths must be absolute; they cannot be relative. You may also configure multiple paths. See an example on the Logstash configuration page.
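For example, multiple paths might be configured as an array (the paths are illustrative):

  file {
    path => [ "/var/log/messages", "/var/log/*.log" ]
  }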
sincedb_path
Where to write the sincedb database (keeps track of the current position of monitored log files). The default writes sincedb files to a path matching “$HOME/.sincedb*”.
sincedb_write_interval
How often (in seconds) to write the sincedb database with the current position of monitored log files.
start_position
Choose where Logstash initially starts reading files: at the beginning or at the end. The default behavior treats files like live streams and thus starts at the end. If you have old data you want to import, set this to “beginning”.
This option only modifies “first contact” situations, where a file is new and has not been seen before. If a file has already been seen, this option has no effect.
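For example, to import an existing log file from the top on first contact (the path is an illustrative assumption):

  file {
    path => "/var/log/old-app.log"   # illustrative path
    start_position => "beginning"    # read from the start instead of tailing
  }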
stat_interval
How often (in seconds) we stat files to see if they have been modified. Increasing this interval decreases the number of system calls we make, but increases the time it takes to detect new log lines.
tags
Add any number of arbitrary tags to your event. This can help with processing later.
type
Add a ‘type’ field to all events handled by this input. Types are used mainly for filter activation. The type is stored as part of the event itself, so you can also use it to search for events in the web interface.
If you try to set a type on an event that already has one (for example, when you send an event from a shipper to an indexer), a new input will not override the existing type. A type set at the shipper stays with that event for its life, even when the event is sent to another Logstash server.
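As a sketch of using the type for filter activation via a conditional, the type name, path, and grok pattern below are illustrative assumptions:

  input {
    file {
      path => "/var/log/apache2/access.log"   # illustrative path
      type => "apache-access"
    }
  }
  filter {
    if [type] == "apache-access" {            # only runs for events of this type
      grok {
        match => [ "message", "%{COMBINEDAPACHELOG}" ]
      }
    }
  }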