java.lang.Object
  org.apache.lucene.index.IndexWriter

An IndexWriter creates and maintains an index.
The third argument (create) to the constructor determines whether a new index is created, or whether an existing index is opened for the addition of new documents. Note that you can open an index with create=true even while readers are using the index. The old readers will continue to search the "point in time" snapshot they had opened, and won't see the newly created index until they re-open.
In either case, documents are added with the addDocument method. When finished adding documents, close should be called.
If an index will not have more documents added for a while and optimal search performance is desired, then the optimize method should be called before the index is closed.
Opening an IndexWriter creates a lock file for the directory in use. Trying to open another IndexWriter on the same directory will lead to an IOException. The IOException is also thrown if an IndexReader on the same directory is used to delete documents from the index.
As of 2.1, IndexWriter can delete documents by Term (see deleteDocuments(Term)) and update (delete then add) documents (see updateDocument(Term, Document)). Deletes are buffered until there are more than setMaxBufferedDeleteTerms(int) delete terms, at which point they are flushed to the index. Note that a flush occurs when there are enough buffered deletes or enough added documents, whichever is sooner. When a flush occurs, both pending deletes and added documents are flushed to the index.
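A minimal usage sketch tying the overview together. This is not from this page: it assumes the Lucene 2.1-era API (StandardAnalyzer, Field.Store/Field.Index constants), a hypothetical index path, and illustrative field names, and it requires the Lucene jar on the classpath.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class IndexWriterSketch {
    public static void main(String[] args) throws Exception {
        // create=true builds a fresh index at this (hypothetical) path,
        // replacing any index already there. Old readers keep searching
        // their point-in-time snapshot until they re-open.
        IndexWriter writer = new IndexWriter("/tmp/example-index",
                                             new StandardAnalyzer(), true);

        Document doc = new Document();
        doc.add(new Field("id", "1", Field.Store.YES, Field.Index.UN_TOKENIZED));
        doc.add(new Field("body", "an IndexWriter creates and maintains an index",
                          Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);

        // Deletes are buffered; they are flushed together with added documents.
        writer.deleteDocuments(new Term("id", "0"));

        writer.optimize(); // optional: best search performance before closing
        writer.close();    // flushes all changes and releases the write lock
    }
}
```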
Field Summary

static int    DEFAULT_MAX_BUFFERED_DELETE_TERMS
              Default value is 1000.

static int    DEFAULT_MAX_BUFFERED_DOCS
              Default value is 10.

static int    DEFAULT_MAX_FIELD_LENGTH
              Default value is 10,000.

static int    DEFAULT_MAX_MERGE_DOCS
              Default value is Integer.MAX_VALUE.

static int    DEFAULT_MERGE_FACTOR
              Default value is 10.

static int    DEFAULT_TERM_INDEX_INTERVAL
              Default value is 128.

static String WRITE_LOCK_NAME

static long   WRITE_LOCK_TIMEOUT
              Default value for the write lock timeout (1,000).
Constructor Summary

IndexWriter(Directory d, Analyzer a)
            Constructs an IndexWriter for the index in d, creating it first if it does not already exist, otherwise appending to the existing index.

IndexWriter(Directory d, Analyzer a, boolean create)
            Constructs an IndexWriter for the index in d.

IndexWriter(File path, Analyzer a)
            Constructs an IndexWriter for the index in path, creating it first if it does not already exist, otherwise appending to the existing index.

IndexWriter(File path, Analyzer a, boolean create)
            Constructs an IndexWriter for the index in path.

IndexWriter(String path, Analyzer a)
            Constructs an IndexWriter for the index in path, creating it first if it does not already exist, otherwise appending to the existing index.

IndexWriter(String path, Analyzer a, boolean create)
            Constructs an IndexWriter for the index in path.
Method Summary

void        addDocument(Document doc)
            Adds a document to this index.

void        addDocument(Document doc, Analyzer analyzer)
            Adds a document to this index, using the provided analyzer instead of the value of getAnalyzer().

void        addIndexes(Directory[] dirs)
            Merges all segments from an array of indexes into this index.

void        addIndexes(IndexReader[] readers)
            Merges the provided indexes into this index.

void        addIndexesNoOptimize(Directory[] dirs)
            Merges all segments from an array of indexes into this index.

void        close()
            Flushes all changes to an index and closes all associated files.

void        deleteDocuments(Term term)
            Deletes the document(s) containing term.

void        deleteDocuments(Term[] terms)
            Deletes the document(s) containing any of the terms.

int         docCount()
            Returns the number of documents currently in this index.

protected void  finalize()
            Release the write lock, if needed.

void        flush()
            Flush all in-memory buffered updates (adds and deletes) to the Directory.

Analyzer    getAnalyzer()
            Returns the analyzer used by this index.

static long getDefaultWriteLockTimeout()

Directory   getDirectory()
            Returns the Directory used by this index.

PrintStream getInfoStream()

int         getMaxBufferedDeleteTerms()

int         getMaxBufferedDocs()

int         getMaxFieldLength()

int         getMaxMergeDocs()

int         getMergeFactor()

Similarity  getSimilarity()
            Expert: Return the Similarity implementation used by this IndexWriter.

int         getTermIndexInterval()
            Expert: Return the interval between indexed terms.

boolean     getUseCompoundFile()
            Get the current setting of whether to use the compound file format.

long        getWriteLockTimeout()

protected void  maybeFlushRamSegments()

int         numRamDocs()
            Expert: Return the number of documents whose segments are currently cached in memory.

void        optimize()
            Merges all segments together into a single segment, optimizing an index for search.

long        ramSizeInBytes()
            Expert: Return the total size of all index files currently cached in memory.

static void setDefaultWriteLockTimeout(long writeLockTimeout)
            Sets the default (for any instance of IndexWriter) maximum time to wait for a write lock (in milliseconds).

void        setInfoStream(PrintStream infoStream)
            If non-null, information about merges and a message when maxFieldLength is reached will be printed to this.

void        setMaxBufferedDeleteTerms(int maxBufferedDeleteTerms)
            Determines the minimal number of delete terms required before the buffered in-memory delete terms are applied and flushed.

void        setMaxBufferedDocs(int maxBufferedDocs)
            Determines the minimal number of documents required before the buffered in-memory documents are merged and a new Segment is created.

void        setMaxFieldLength(int maxFieldLength)
            The maximum number of terms that will be indexed for a single field in a document.

void        setMaxMergeDocs(int maxMergeDocs)
            Determines the largest number of documents ever merged by addDocument().

void        setMergeFactor(int mergeFactor)
            Determines how often segment indices are merged by addDocument().

void        setSimilarity(Similarity similarity)
            Expert: Set the Similarity implementation used by this IndexWriter.

void        setTermIndexInterval(int interval)
            Expert: Set the interval between indexed terms.

void        setUseCompoundFile(boolean value)
            Setting to turn on usage of a compound file.

void        setWriteLockTimeout(long writeLockTimeout)
            Sets the maximum time to wait for a write lock (in milliseconds) for this instance of IndexWriter.

void        updateDocument(Term term, Document doc)
            Updates a document by first deleting the document(s) containing term and then adding the new document.

void        updateDocument(Term term, Document doc, Analyzer analyzer)
            Updates a document by first deleting the document(s) containing term and then adding the new document.
Methods inherited from class java.lang.Object

clone, equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail

public static long WRITE_LOCK_TIMEOUT

    Default value for the write lock timeout (1,000). See Also: setDefaultWriteLockTimeout(long)

public static final String WRITE_LOCK_NAME

public static final int DEFAULT_MERGE_FACTOR

    Default value is 10. Change using setMergeFactor(int).

public static final int DEFAULT_MAX_BUFFERED_DOCS

    Default value is 10. Change using setMaxBufferedDocs(int).

public static final int DEFAULT_MAX_BUFFERED_DELETE_TERMS

    Default value is 1000. Change using setMaxBufferedDeleteTerms(int).

public static final int DEFAULT_MAX_MERGE_DOCS

    Default value is Integer.MAX_VALUE. Change using setMaxMergeDocs(int).

public static final int DEFAULT_MAX_FIELD_LENGTH

    Default value is 10,000. Change using setMaxFieldLength(int).

public static final int DEFAULT_TERM_INDEX_INTERVAL

    Default value is 128. Change using setTermIndexInterval(int).
Constructor Detail |
Constructor Detail

public IndexWriter(String path, Analyzer a, boolean create) throws IOException

    Constructs an IndexWriter for the index in path. Text will be analyzed with a. If create is true, then a new, empty index will be created in path, replacing the index already there, if any.

    Parameters:
        path - the path to the index directory
        a - the analyzer to use
        create - true to create the index or overwrite the existing one; false to append to the existing index
    Throws:
        IOException - if the directory cannot be read/written to, or if it does not exist and create is false
public IndexWriter(File path, Analyzer a, boolean create) throws IOException

    Constructs an IndexWriter for the index in path. Text will be analyzed with a. If create is true, then a new, empty index will be created in path, replacing the index already there, if any.

    Parameters:
        path - the path to the index directory
        a - the analyzer to use
        create - true to create the index or overwrite the existing one; false to append to the existing index
    Throws:
        IOException - if the directory cannot be read/written to, or if it does not exist and create is false
public IndexWriter(Directory d, Analyzer a, boolean create) throws IOException

    Constructs an IndexWriter for the index in d. Text will be analyzed with a. If create is true, then a new, empty index will be created in d, replacing the index already there, if any.

    Parameters:
        d - the index directory
        a - the analyzer to use
        create - true to create the index or overwrite the existing one; false to append to the existing index
    Throws:
        IOException - if the directory cannot be read/written to, or if it does not exist and create is false
public IndexWriter(String path, Analyzer a) throws IOException

    Constructs an IndexWriter for the index in path, creating it first if it does not already exist, otherwise appending to the existing index. Text will be analyzed with a.

    Parameters:
        path - the path to the index directory
        a - the analyzer to use
    Throws:
        IOException - if the directory cannot be created or read/written to

public IndexWriter(File path, Analyzer a) throws IOException

    Constructs an IndexWriter for the index in path, creating it first if it does not already exist, otherwise appending to the existing index. Text will be analyzed with a.

    Parameters:
        path - the path to the index directory
        a - the analyzer to use
    Throws:
        IOException - if the directory cannot be created or read/written to

public IndexWriter(Directory d, Analyzer a) throws IOException

    Constructs an IndexWriter for the index in d, creating it first if it does not already exist, otherwise appending to the existing index. Text will be analyzed with a.

    Parameters:
        d - the index directory
        a - the analyzer to use
    Throws:
        IOException - if the directory cannot be created or read/written to

Method Detail
public boolean getUseCompoundFile()

    Get the current setting of whether to use the compound file format. See Also: setUseCompoundFile(boolean)

public void setUseCompoundFile(boolean value)

    Setting to turn on usage of a compound file.
public void setSimilarity(Similarity similarity)

    Expert: Set the Similarity implementation used by this IndexWriter. See Also: Similarity.setDefault(Similarity)

public Similarity getSimilarity()

    Expert: Return the Similarity implementation used by this IndexWriter. This defaults to the current value of Similarity.getDefault().
public void setTermIndexInterval(int interval)

    Expert: Set the interval between indexed terms. In particular, numUniqueTerms/interval terms are read into memory by an IndexReader, and, on average, interval/2 terms must be scanned for each random term access. See Also: DEFAULT_TERM_INDEX_INTERVAL

public int getTermIndexInterval()

    Expert: Return the interval between indexed terms. See Also: setTermIndexInterval(int)
public void setMaxMergeDocs(int maxMergeDocs)

    Determines the largest number of documents ever merged by addDocument(). The default value is Integer.MAX_VALUE.

public int getMaxMergeDocs()

    See Also: setMaxMergeDocs(int)

public void setMaxFieldLength(int maxFieldLength)

    The maximum number of terms that will be indexed for a single field in a document. The default value is 10,000.

public int getMaxFieldLength()

    See Also: setMaxFieldLength(int)
public void setMaxBufferedDocs(int maxBufferedDocs)

    Determines the minimal number of documents required before the buffered in-memory documents are merged and a new Segment is created. Since documents are merged in a RAMDirectory, a large value gives faster indexing. At the same time, mergeFactor limits the number of files open in a FSDirectory. The default value is 10.

    Throws:
        IllegalArgumentException - if maxBufferedDocs is smaller than 2

public int getMaxBufferedDocs()

    See Also: setMaxBufferedDocs(int)
public void setMaxBufferedDeleteTerms(int maxBufferedDeleteTerms)

    Determines the minimal number of delete terms required before the buffered in-memory delete terms are applied and flushed. If there are documents buffered in memory at the time, they are merged and a new segment is created. The default value is DEFAULT_MAX_BUFFERED_DELETE_TERMS.

    Throws:
        IllegalArgumentException - if maxBufferedDeleteTerms is smaller than 1

public int getMaxBufferedDeleteTerms()

    See Also: setMaxBufferedDeleteTerms(int)
public void setMergeFactor(int mergeFactor)

    Determines how often segment indices are merged by addDocument(). This must never be less than 2. The default value is 10.

public int getMergeFactor()

    See Also: setMergeFactor(int)

public void setInfoStream(PrintStream infoStream)

    If non-null, information about merges and a message when maxFieldLength is reached will be printed to this.

public PrintStream getInfoStream()

    See Also: setInfoStream(java.io.PrintStream)
public void setWriteLockTimeout(long writeLockTimeout)

    Sets the maximum time to wait for a write lock (in milliseconds) for this instance of IndexWriter. Use setDefaultWriteLockTimeout(long) to change the default value for all instances of IndexWriter.

public long getWriteLockTimeout()

    See Also: setWriteLockTimeout(long)

public static void setDefaultWriteLockTimeout(long writeLockTimeout)

    Sets the default (for any instance of IndexWriter) maximum time to wait for a write lock (in milliseconds).

public static long getDefaultWriteLockTimeout()

    See Also: setDefaultWriteLockTimeout(long)
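The tuning and lock-timeout setters can be sketched together. This is illustrative only (the values are arbitrary, not recommendations, and the snippet assumes an already-open writer and the Lucene 2.1-era jar on the classpath):

```java
import org.apache.lucene.index.IndexWriter;

class TuningSketch {
    // Illustrative batch-indexing configuration; defaults are often fine.
    static void tuneForBatchIndexing(IndexWriter writer) {
        writer.setMergeFactor(30);              // must be >= 2; default 10
        writer.setMaxBufferedDocs(1000);        // must be >= 2; default 10
        writer.setMaxBufferedDeleteTerms(5000); // must be >= 1; default 1000
        writer.setInfoStream(System.out);       // print merge diagnostics

        // Write-lock timeouts, in milliseconds (default 1,000):
        IndexWriter.setDefaultWriteLockTimeout(5000); // all future instances
        writer.setWriteLockTimeout(5000);             // this instance only
    }
}
```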
public void close() throws IOException

    Flushes all changes to an index and closes all associated files.

    If an Exception is hit during close, eg due to disk full or some other reason, then both the on-disk index and the internal state of the IndexWriter instance will be consistent. However, the close will not be complete even though part of it (flushing buffered documents) may have succeeded, so the write lock will still be held.

    If you can correct the underlying cause (eg free up some disk space) then you can call close() again. Failing that, if you want to force the write lock to be released (dangerous, because you may then lose buffered docs in the IndexWriter instance) then you can do something like this:

        try {
          writer.close();
        } finally {
          if (IndexReader.isLocked(directory)) {
            IndexReader.unlock(directory);
          }
        }

    after which, you must be certain not to use the writer instance anymore.

    Throws:
        IOException
protected void finalize() throws Throwable

    Release the write lock, if needed.

    Throws:
        Throwable

public Directory getDirectory()

    Returns the Directory used by this index.

public Analyzer getAnalyzer()

    Returns the analyzer used by this index.

public int docCount()

    Returns the number of documents currently in this index.
public void addDocument(Document doc) throws IOException

    Adds a document to this index. If the document contains more than setMaxFieldLength(int) terms for a given field, the remainder are discarded.

    Note that if an Exception is hit (for example disk full) then the index will be consistent, but this document may not have been added. Furthermore, it's possible the index will have one segment in non-compound format even when using compound files (when a merge has partially succeeded).

    This method periodically flushes pending documents to the Directory (every setMaxBufferedDocs(int) documents), and also periodically merges segments in the index (every setMergeFactor(int) flushes). When this occurs, the method will take more time to run (possibly a long time if the index is large), and will require free temporary space in the Directory to do the merging.

    The amount of free space required when a merge is triggered is up to 1X the size of all segments being merged, when no readers/searchers are open against the index, and up to 2X the size of all segments being merged when readers/searchers are open against the index (see optimize() for details). Most merges are small (merging the smallest segments together), but whenever a full merge occurs (all segments in the index, which is the worst case for temporary space usage) then the maximum free disk space required is the same as optimize().

    Throws:
        IOException
public void addDocument(Document doc, Analyzer analyzer) throws IOException

    Adds a document to this index, using the provided analyzer instead of the value of getAnalyzer(). If the document contains more than setMaxFieldLength(int) terms for a given field, the remainder are discarded.

    See addDocument(Document) for details on index and IndexWriter state after an Exception, and flushing/merging temporary free space requirements.

    Throws:
        IOException
public void deleteDocuments(Term term) throws IOException

    Deletes the document(s) containing term.

    Parameters:
        term - the term to identify the documents to be deleted
    Throws:
        IOException

public void deleteDocuments(Term[] terms) throws IOException

    Deletes the document(s) containing any of the terms.

    Parameters:
        terms - array of terms to identify the documents to be deleted
    Throws:
        IOException
public void updateDocument(Term term, Document doc) throws IOException

    Updates a document by first deleting the document(s) containing term and then adding the new document. The delete and then add are atomic as seen by a reader on the same index (flush may happen only after the add).

    Parameters:
        term - the term to identify the document(s) to be deleted
        doc - the document to be added
    Throws:
        IOException

public void updateDocument(Term term, Document doc, Analyzer analyzer) throws IOException

    Updates a document by first deleting the document(s) containing term and then adding the new document. The delete and then add are atomic as seen by a reader on the same index (flush may happen only after the add).

    Parameters:
        term - the term to identify the document(s) to be deleted
        doc - the document to be added
        analyzer - the analyzer to use when analyzing the document
    Throws:
        IOException
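As a sketch of the update pattern: replacing a document keyed by a unique id field. The "id"/"body" field names are illustrative, not from this page; for the Term to match exactly, the id field must be indexed untokenized. Assumes the Lucene 2.1-era jar on the classpath.

```java
import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

class UpdateSketch {
    // Assumes 'writer' is an open IndexWriter (see the constructors above).
    static void replaceById(IndexWriter writer, String id, String newBody)
            throws IOException {
        Document updated = new Document();
        updated.add(new Field("id", id,
                              Field.Store.YES, Field.Index.UN_TOKENIZED));
        updated.add(new Field("body", newBody,
                              Field.Store.YES, Field.Index.TOKENIZED));
        // Delete-then-add, atomic as seen by readers of the same index.
        writer.updateDocument(new Term("id", id), updated);
    }
}
```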
public void optimize() throws IOException

    Merges all segments together into a single segment, optimizing an index for search.
Note that this requires substantial temporary free space in the Directory (see LUCENE-764 for details):
If no readers/searchers are open against the index, then free space required is up to 1X the total size of the starting index. For example, if the starting index is 10 GB, then you must have up to 10 GB of free space before calling optimize.
If readers/searchers are using the index, then free space required is up to 2X the size of the starting index. This is because in addition to the 1X used by optimize, the original 1X of the starting index is still consuming space in the Directory as the readers are holding the segments files open. Even on Unix, where it will appear as if the files are gone ("ls" won't list them), they still consume storage due to "delete on last close" semantics.
Furthermore, if some but not all readers re-open while the optimize is underway, this will cause > 2X temporary space to be consumed as those new readers will then hold open the partially optimized segments at that time. It is best not to re-open readers while optimize is running.
The actual temporary usage could be much less than these figures (it depends on many factors).
Once the optimize completes, the total size of the index will be less than the size of the starting index. It could be quite a bit smaller (if there were many pending deletes) or just slightly smaller.
If an Exception is hit during optimize(), for example due to disk full, the index will not be corrupt and no documents will have been lost. However, it may have been partially optimized (some segments were merged but not all), and it's possible that one of the segments in the index will be in non-compound format even when using compound file format. This will occur when the Exception is hit during conversion of the segment into compound format.
    Throws:
        IOException
public void addIndexes(Directory[] dirs) throws IOException

    Merges all segments from an array of indexes into this index.
This may be used to parallelize batch indexing. A large document collection can be broken into sub-collections. Each sub-collection can be indexed in parallel, on a different thread, process or machine. The complete index can then be created by merging sub-collection indexes with this method.
After this completes, the index is optimized.
This method is transactional in how Exceptions are handled: it does not commit a new segments_N file until all indexes are added. This means if an Exception occurs (for example disk full), then either no indexes will have been added or they all will have been.
If an Exception is hit, it's still possible that all indexes were successfully added. This happens when the Exception is hit when trying to build a CFS file. In this case, one segment in the index will be in non-CFS format, even when using compound file format.
Also note that on an Exception, the index may still have been partially or fully optimized even though none of the input indexes were added.
Note that this requires temporary free space in the
Directory up to 2X the sum of all input indexes
(including the starting index). If readers/searchers
are open against the starting index, then temporary
free space required will be higher by the size of the
starting index (see optimize()
for details).
Once this completes, the final size of the index will be less than the sum of all input index sizes (including the starting index). It could be quite a bit smaller (if there were many pending deletes) or just slightly smaller.
See LUCENE-702 for details.
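The parallel-indexing pattern described above can be sketched as follows. The shard and output paths are hypothetical, and the snippet assumes the Lucene 2.1-era API (FSDirectory.getDirectory, StandardAnalyzer) with the Lucene jar on the classpath.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

class MergeSketch {
    public static void main(String[] args) throws Exception {
        // Sub-indexes built in parallel elsewhere (threads, processes, machines).
        Directory[] shards = {
            FSDirectory.getDirectory("/tmp/shard0"),
            FSDirectory.getDirectory("/tmp/shard1")
        };

        // create=true: start the combined index fresh.
        IndexWriter merged = new IndexWriter("/tmp/combined",
                                             new StandardAnalyzer(), true);
        merged.addIndexes(shards); // transactional: all shards added or none
        merged.close();            // combined index is optimized on completion
    }
}
```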
    Throws:
        IOException
public void addIndexesNoOptimize(Directory[] dirs) throws IOException

    Merges all segments from an array of indexes into this index.

    This is similar to addIndexes(Directory[]). However, no optimize() is called either at the beginning or at the end. Instead, merges are carried out as necessary.

    This requires this index not be among those to be added, and the upper bound of those segment doc counts not exceed maxMergeDocs.

    See addIndexes(Directory[]) for details on transactional semantics, temporary free space required in the Directory, and non-CFS segments on an Exception.

    Throws:
        IOException
public void addIndexes(IndexReader[] readers) throws IOException

    Merges the provided indexes into this index.

    After this completes, the index is optimized.

    The provided IndexReaders are not closed.

    See addIndexes(Directory[]) for details on transactional semantics, temporary free space required in the Directory, and non-CFS segments on an Exception.

    Throws:
        IOException
protected final void maybeFlushRamSegments() throws IOException

    Throws:
        IOException

public final void flush() throws IOException

    Flush all in-memory buffered updates (adds and deletes) to the Directory.

    Throws:
        IOException

public final long ramSizeInBytes()

    Expert: Return the total size of all index files currently cached in memory.

public final int numRamDocs()

    Expert: Return the number of documents whose segments are currently cached in memory.