The classic tokenizer is a grammar-based tokenizer that is good for English-language documents. This tokenizer has heuristics for the special treatment of acronyms, company names, email addresses, and internet host names. However, these rules don't always work, and the tokenizer doesn't work well for most languages other than English:
POST _analyze
{
  "tokenizer": "classic",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

The above sentence would produce the following terms:
[ The, 2, QUICK, Brown, Foxes, jumped, over, the, lazy, dog's, bone ]
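To illustrate the special-case heuristics, the sketch below runs the classic tokenizer over text containing an email address and an internet host name (both made-up examples). The tokenizer is expected to keep each of them together as a single token rather than splitting on the dots or the @ sign:

POST _analyze
{
  "tokenizer": "classic",
  "text": "Email john.smith@example.com or visit www.example.com"
}

This should produce terms along the lines of:

[ Email, john.smith@example.com, or, visit, www.example.com ]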
The classic tokenizer accepts the following parameters:
max_token_length
    The maximum token length. If a token is seen that exceeds this length then it is split at max_token_length intervals. Defaults to 255.
In this example, we configure the classic tokenizer to have a
max_token_length of 5 (for demonstration purposes):
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "classic",
          "max_token_length": 5
        }
      }
    }
  }
}
POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

The above example produces the following terms:
[ The, 2, QUICK, Brown, Foxes, jumpe, d, over, the, lazy, dog's, bone ]
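Note that jumped is six characters long, so it exceeds the configured max_token_length of 5 and is split at five-character intervals, producing the tokens jumpe and d.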