This TokenFilter limits the number of tokens while indexing. It is a replacement for the maximum field length setting inside {@link org.apache.lucene.index.IndexWriter}.
By default, this filter ignores any tokens in the wrapped {@code TokenStream} once the limit has been reached, which can result in {@code reset()} being called prior to {@code incrementToken()} returning {@code false}. For most {@code TokenStream} implementations this should be acceptable, and faster than consuming the full stream. If you are wrapping a {@code TokenStream} which requires that the full stream of tokens be exhausted in order to function properly, use the {@link #LimitTokenCountFilter(TokenStream,int,boolean) consumeAllTokens} option.
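For example, a minimal sketch of wrapping an analyzer's stream (the analyzer, the field name {@code "body"}, and the limit of 10 are illustrative):
<pre class="prettyprint">
TokenStream stream = analyzer.tokenStream("body", reader);
// Emit at most 10 tokens; by default the wrapped stream is not exhausted
// once the limit is reached.
stream = new LimitTokenCountFilter(stream, 10);
// Or, if the wrapped stream must be fully consumed to function properly:
// stream = new LimitTokenCountFilter(stream, 10, true);
</pre>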