Packages that use CharTokenizer

| Package | Description |
|---|---|
| org.apache.lucene.analysis | API and code to convert text into indexable/searchable tokens. |
| org.apache.lucene.analysis.ru | Analyzer for Russian. |
Uses of CharTokenizer in org.apache.lucene.analysis

Subclasses of CharTokenizer in org.apache.lucene.analysis

| Modifier | Class | Description |
|---|---|---|
| class | LetterTokenizer | A LetterTokenizer is a tokenizer that divides text at non-letters. |
| class | LowerCaseTokenizer | LowerCaseTokenizer performs the function of LetterTokenizer and LowerCaseFilter together. |
| class | WhitespaceTokenizer | A WhitespaceTokenizer is a tokenizer that divides text at whitespace. |
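For orientation, here is a minimal sketch of how these three subclasses might be driven. It assumes the classic (pre-2.9) TokenStream API, where next() returns a Token or null at end of input and the term text is read with termText(); later Lucene releases use a different iteration API, so treat this as illustrative only.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.LetterTokenizer;
import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class CharTokenizerDemo {

    public static void main(String[] args) throws Exception {
        String text = "The Quick-Brown Fox";

        // Divides at non-letters: The / Quick / Brown / Fox
        dump(new LetterTokenizer(new StringReader(text)));

        // LetterTokenizer plus LowerCaseFilter in one step: the / quick / brown / fox
        dump(new LowerCaseTokenizer(new StringReader(text)));

        // Divides only at whitespace: The / Quick-Brown / Fox
        dump(new WhitespaceTokenizer(new StringReader(text)));
    }

    private static void dump(TokenStream stream) throws Exception {
        // Classic-API iteration: next() returns null when the input is exhausted.
        for (Token token = stream.next(); token != null; token = stream.next()) {
            System.out.print(token.termText() + " ");
        }
        System.out.println();
        stream.close();
    }
}
```

Each tokenizer is constructed over a fresh Reader, since a Tokenizer consumes its input stream as it produces tokens.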
Uses of CharTokenizer in org.apache.lucene.analysis.ru

Subclasses of CharTokenizer in org.apache.lucene.analysis.ru

| Modifier | Class | Description |
|---|---|---|
| class | RussianLetterTokenizer | A RussianLetterTokenizer is a tokenizer that extends LetterTokenizer by additionally looking up letters in a given "russian charset". |
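The description above implies that RussianLetterTokenizer is configured with a character table; rather than guess its exact constructor, the sketch below shows the same pattern applied directly to CharTokenizer: a subclass that accepts ordinary letters plus a caller-supplied set of extra characters. It assumes the classic CharTokenizer contract in which isTokenChar takes a char; the ExtraLettersTokenizer name and its extraChars parameter are hypothetical and not part of Lucene.

```java
import java.io.Reader;

import org.apache.lucene.analysis.CharTokenizer;

// Hypothetical subclass, not part of Lucene: accepts ordinary letters plus
// any character found in a caller-supplied table, mirroring the way
// RussianLetterTokenizer broadens LetterTokenizer.
public class ExtraLettersTokenizer extends CharTokenizer {

    private final char[] extraChars;

    public ExtraLettersTokenizer(Reader input, char[] extraChars) {
        super(input);
        this.extraChars = extraChars;
    }

    // A character starts or continues a token if it is a letter
    // or is listed in the extra character table.
    protected boolean isTokenChar(char c) {
        if (Character.isLetter(c)) {
            return true;
        }
        for (int i = 0; i < extraChars.length; i++) {
            if (extraChars[i] == c) {
                return true;
            }
        }
        return false;
    }
}
```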