Re: ITokenizer instance is ExtendedWhitespaceTokenizer not ChineseTokenizerAdapter
The fallback is used only when the primary tokenizer is not available
(because of a missing optional JAR, for example). I've just checked and
it works fine as long as you have Lucene on the classpath (because
that's what is used for tokenizing Chinese).
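To illustrate the selection logic described above: this is only a minimal sketch, not the actual Carrot2 code, and `TokenizerFallbackSketch`/`selectTokenizer` are hypothetical names. It probes for a core Lucene class to decide whether the Lucene-backed Chinese tokenizer can be used, falling back to the whitespace tokenizer otherwise.

```java
// Sketch only: the real factory lives elsewhere in the codebase.
// Probing for a Lucene class tells us whether the optional JAR is present.
public class TokenizerFallbackSketch {

    // Returns the name of the tokenizer that would be instantiated.
    static String selectTokenizer() {
        try {
            // A core Lucene class; if it loads, Lucene is on the classpath.
            Class.forName("org.apache.lucene.analysis.Analyzer");
            return "ChineseTokenizerAdapter";
        } catch (ClassNotFoundException e) {
            // Optional JAR missing: fall back to the default tokenizer.
            return "ExtendedWhitespaceTokenizer";
        }
    }

    public static void main(String[] args) {
        System.out.println(selectTokenizer());
    }
}
```

Running this without Lucene on the classpath prints the fallback name, which matches the behavior you observed.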
Could you use JIRA for submitting bug reports, please? Ideally, also attach
a snippet of code that reproduces the buggy behavior. Thanks.