The `standard` analyzer is the default analyzer, used if none is specified. It provides grammar-based tokenization (based on the Unicode Text Segmentation algorithm, as specified in Unicode Standard Annex #29) and works well for most languages.
```
POST _analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The above sentence would produce the following terms:
[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog's, bone ]
The `standard` analyzer accepts the following parameters:

| Parameter | Description |
|---|---|
| `max_token_length` | The maximum token length. If a token is seen that exceeds this length then it is split at `max_token_length` intervals. Defaults to `255`. |
| `stopwords` | A pre-defined stop words list like `_english_`, or an array containing a list of stop words. Defaults to `_none_`. |
| `stopwords_path` | The path to a file containing stop words. |
See the Stop Token Filter for more information about stop word configuration.
In this example, we configure the `standard` analyzer to have a `max_token_length` of 5 (for demonstration purposes), and to use the pre-defined list of English stop words:
```
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_english_analyzer": {
          "type": "standard",
          "max_token_length": 5,
          "stopwords": "_english_"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_english_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
```
The above example produces the following terms:
[ 2, quick, brown, foxes, jumpe, d, over, lazy, dog's, bone ]
The `standard` analyzer consists of:

- Tokenizer: Standard Tokenizer
- Token Filters: Lower Case Token Filter, Stop Token Filter (disabled by default)
If you need to customize the `standard` analyzer beyond the configuration parameters, you need to recreate it as a `custom` analyzer and modify it, usually by adding token filters. The result recreates the built-in `standard` analyzer, and you can use it as a starting point:
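A minimal sketch of such a rebuilt analyzer follows; the index name `standard_example` and the analyzer name `rebuilt_standard` are illustrative placeholders:

```
PUT /standard_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "rebuilt_standard": {
          "tokenizer": "standard",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  }
}
```

Additional token filters, such as `stop`, can be appended to the `filter` array to extend the rebuilt analyzer.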