Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis | |
class ISOLatin1AccentFilter | A filter that replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) with their unaccented equivalents. |
class LengthFilter | Removes words that are too long or too short from the stream. |
class LowerCaseFilter | Normalizes token text to lower case. |
class PorterStemFilter | Transforms the token stream as per the Porter stemming algorithm. |
class StopFilter | Removes stop words from a token stream. |
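
All of these filters share the same decorator pattern: a TokenFilter wraps another TokenStream and rewrites, drops, or injects tokens as they pass through. The following is a minimal sketch of chaining the core filters, assuming the classic Token-based TokenStream API (TokenStream.next() returning a Token) that these classes were written against; the tokenizer choice and the sample text are illustrative only, and newer Lucene releases use an attribute-based API instead.

```java
import java.io.StringReader;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.PorterStemFilter;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class FilterChainSketch {
    public static void main(String[] args) throws Exception {
        // The tokenizer produces the raw tokens; each filter wraps the stream before it.
        TokenStream stream = new WhitespaceTokenizer(
                new StringReader("The QUICK brown Foxes jumped"));
        stream = new LowerCaseFilter(stream);                    // "QUICK" -> "quick"
        stream = new StopFilter(stream, new String[] { "the" }); // drop stop words
        stream = new PorterStemFilter(stream);                   // "foxes" -> "fox", "jumped" -> "jump"

        // Pull tokens through the whole chain.
        for (Token t = stream.next(); t != null; t = stream.next()) {
            System.out.println(t.termText());
        }
    }
}
```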
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.br | |
class BrazilianStemFilter | A filter that stems Brazilian Portuguese words; based on GermanStemFilter. |
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.cn | |
class ChineseFilter | Filters tokens against a stop word table; tokens containing digits are not allowed. |
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.de | |
class GermanStemFilter | A filter that stems German words. |
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.el | |
class GreekLowerCaseFilter | Normalizes token text to lower case, using the given ("greek") charset. |
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.fr | |
class FrenchStemFilter | A filter that stems French words. |
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.nl | |
class DutchStemFilter | A filter that stems Dutch words. |
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.ru | |
class RussianLowerCaseFilter | Normalizes token text to lower case, using the given ("russian") charset. |
class RussianStemFilter | A filter that stems Russian words. |
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.snowball | |
class SnowballFilter | A filter that stems words using a Snowball-generated stemmer. |
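
SnowballFilter covers several of the per-language stemmers above through generated stemmer classes selected by name. The fragment below reuses the Token-based driver loop from the earlier sketch; the stemmer name "English" and the sample text are illustrative assumptions.

```java
// Same driver loop as the earlier sketch; only the chain differs.
TokenStream stream = new WhitespaceTokenizer(new StringReader("running runner runs"));
stream = new LowerCaseFilter(stream);             // lowercase first, as the contrib SnowballAnalyzer does
stream = new SnowballFilter(stream, "English");   // name of the Snowball-generated stemmer class
```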
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.analysis.standard | |
class StandardFilter | Normalizes tokens extracted with StandardTokenizer. |
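
StandardFilter is normally placed directly behind StandardTokenizer. The fragment below sketches roughly the chain that StandardAnalyzer builds; StopAnalyzer.ENGLISH_STOP_WORDS is the default English stop word list in the core analysis package, and the sample text is illustrative.

```java
// StandardFilter removes the dots from acronyms and the trailing 's from words.
TokenStream stream = new StandardTokenizer(new StringReader("The U.S.A. Corporation's report"));
stream = new StandardFilter(stream);
stream = new LowerCaseFilter(stream);
stream = new StopFilter(stream, StopAnalyzer.ENGLISH_STOP_WORDS);
```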
Classes derived from org.apache.lucene.analysis.TokenFilter in package org.apache.lucene.index.memory | |
class SynonymTokenFilter | Injects additional tokens for synonyms of token terms fetched from the underlying child stream; the child stream must deliver lowercase tokens for synonyms to be found. |
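
Because synonyms are looked up against lowercase keys, a LowerCaseFilter has to sit in front of the SynonymTokenFilter. The following is a rough sketch, assuming the contrib memory package's SynonymMap loaded from a WordNet prolog synonym file; the file path is a placeholder, and the constructor signatures should be checked against the Lucene version in use.

```java
import java.io.FileInputStream;
import java.io.StringReader;

import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;
import org.apache.lucene.index.memory.SynonymMap;
import org.apache.lucene.index.memory.SynonymTokenFilter;

public class SynonymSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder path: a WordNet prolog synonym database (wn_s.pl).
        SynonymMap synonyms = new SynonymMap(new FileInputStream("/path/to/wn_s.pl"));

        TokenStream stream = new WhitespaceTokenizer(new StringReader("Fast Cars"));
        stream = new LowerCaseFilter(stream);                  // synonyms are only found for lowercase terms
        stream = new SynonymTokenFilter(stream, synonyms, 3);  // inject at most 3 synonyms per term

        // Original terms and injected synonyms come out of the same stream.
        for (Token t = stream.next(); t != null; t = stream.next()) {
            System.out.println(t.termText() + " (" + t.type() + ")");
        }
    }
}
```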