mnoGoSearch 3.2 reference manual: Full-featured search engine software
Chapter 5. Storing mnoGoSearch data
Beginning with version 3.1.5, mnoGoSearch supports a new "cache" word storage mode, which makes it possible to index and quickly search through several million documents.
The main idea of cache storage mode is that the word index is stored on disk rather than in the SQL database. URL information (the "url" table), however, is kept in the SQL database. The word index is divided into 8192 files using a 32-bit word_id built from the CRC32 of the word. The index is located in files under the /var/tree directory of the mnoGoSearch installation.
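The mapping from a word_id to one of those index files can be sketched with plain shell arithmetic. Note that the manual only states that the index is split into 8192 files by word_id; the choice of the top 13 bits (8192 = 2^13) below is an assumption for illustration, not mnoGoSearch's documented scheme.

```shell
#!/bin/sh
# Sketch: which of the 8192 (= 2^13) index files a 32-bit word_id falls into.
# ASSUMPTION: the high 13 bits select the file; the real scheme may differ.
word_id=3735928559                 # example 32-bit value (0xDEADBEEF)
file_no=$(( word_id >> 19 ))       # drop the low 19 bits, keep the high 13
printf 'word_id %u -> file %04x\n' "$word_id" "$file_no"
# -> word_id 3735928559 -> file 1bd5
```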
There are three additional programs, cached, splitter and mkind, used in "cache mode" indexing.
cached is a TCP daemon that collects word information from indexers and stores it on your hard disk. It can operate in two modes: as the old cachelogd daemon, logging data only, or in a new mode in which the cachelogd and splitter functionality are combined.
splitter is a program that creates fast word indexes from the data collected by cached. These indexes are used later in the search process.
mkind is a tool that creates search limits by tag, category, etc.
To start "cache mode" follow these steps:
Start cached server:
cd /usr/local/mnogosearch/sbin
./cached 2>cached.out &
It will write some debug information to the cached.out file. cached also creates a cached.pid file in the /var directory of the base mnoGoSearch installation.
cached listens for TCP connections and can accept connections from several indexers, including indexers running on different machines. The theoretical maximum number of indexer connections is 128. In old mode, cached stores the information sent by indexers in the /var/splitter/ directory of the mnoGoSearch installation; in new mode, it stores it in the /var/tree/ directory.
By default, cached starts in new mode. To run it in old mode, i.e. logs-only mode, run it with the -l switch:
cached -l
Or specify the "LogsOnly yes" command in your indexer.conf.
You can specify the port for cached to use without recompiling. To do so, run
./cached -p8000
where 8000 is the port number you choose.
You can also specify a directory to store data in (it is the /var directory by default) with this command:
./cached -w /path/to/var/dir
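The options above can also be combined in a single invocation; the port number and var directory below are example values only:

```shell
# Start cached on port 8000, storing data under a custom var directory,
# with stderr redirected to a log file (port and path are example values)
cd /usr/local/mnogosearch/sbin
./cached -p8000 -w /usr/local/mnogosearch/var 2>cached.out &
```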
Configure your indexer.conf as usual and add these two lines:
DBMode cache
LogdAddr localhost:7000
The LogdAddr command specifies the cached location. Each indexer will connect to cached at the given address on startup.
Run the indexers. Several indexers can be executed simultaneously. Note that you may install indexers on different machines and point them all at the same cached server. This distributed setup makes indexing faster.
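As a sketch, such a distributed setup might look like this; the host name is hypothetical, and each machine's indexer.conf must point LogdAddr at the same cached server:

```shell
# In indexer.conf on every crawling machine (host name is hypothetical):
#   DBMode cache
#   LogdAddr indexhost.example.com:7000
# Then, on each machine, start one or more indexers in the background:
/usr/local/mnogosearch/sbin/indexer &
/usr/local/mnogosearch/sbin/indexer &
```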
Creating the word index. This stage is not needed if cached runs in the new, i.e. combined, mode. Once some information has been gathered by the indexers and collected by cached in the /var/splitter/ directory, fast word indexes can be created. The splitter program is responsible for this; it is installed in the /sbin directory. Note that indexes can be created at any time without interrupting the current indexing process.
Indexes are created in the following two steps:
Sending the -HUP signal to cached. cached will flush all buffers to log files on the hard disk. You can use the cached.pid file to do this:
kill -HUP `cat /usr/local/mnogosearch/var/cached.pid`
Building word index. Run splitter without any arguments:
/usr/local/mnogosearch/sbin/splitter
It will sequentially process all 4096 prepared files in the /var/splitter/ directory and use them to build the fast word index. Processed logs in the /var/splitter/ directory are truncated after this operation.
splitter has two command line arguments, -f [first file] and -t [last file], which limit the range of files used. If no parameters are specified, splitter processes all 4096 prepared files. You can limit the file range using the -f and -t keys, specifying their parameters in HEX notation. For example, splitter -f 000 -t A00 will create word indexes using files in the range 000 to A00. These keys allow running several splitters at the same time, which usually builds the indexes more quickly. For example, this shell script starts four splitters in the background:
#!/bin/sh
splitter -f 000 -t 3f0 &
splitter -f 400 -t 7f0 &
splitter -f 800 -t bf0 &
splitter -f c00 -t ff0 &
There is a run-splitter script in the /sbin directory of the mnoGoSearch installation. It helps execute the index building steps in sequence.
"run-splitter" has these two command line parameters:
run-splitter --hup --split
or a short version:
run-splitter -k -s
Each parameter activates the corresponding index building step. run-splitter executes the steps of index building in the proper order:
Sending the -HUP signal to cached. The --hup (or -k) run-splitter argument is responsible for this.
Running splitter: the --split (or -s) key.
In most cases, just run the run-splitter script with both the -k and -s arguments. Using these flags separately, one per index building step, is rarely required.
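For reference, the two steps performed by run-splitter -k -s could be done by hand roughly as follows; the sleep is an assumption, giving cached time to flush its buffers before splitter reads the logs:

```shell
#!/bin/sh
# Manual equivalent of `run-splitter -k -s` (sketch)
kill -HUP `cat /usr/local/mnogosearch/var/cached.pid`   # flush cached buffers to logs
sleep 5                                                 # ASSUMPTION: wait for the flush
/usr/local/mnogosearch/sbin/splitter                    # build the fast word index
```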
To start using search.cgi in "cache mode", edit your search.htm template as usual and add this line:
DBMode cache
To use search limits in cache mode, you should add the appropriate Limit command(s) to your indexer.conf and to search.htm or searchd.conf (if searchd is used).
For example, to use search limits by tag, by category and by site, add the following lines to search.htm, or to searchd.conf if searchd is used:
Limit t:tag
Limit c:category
Limit site:siteid
where t is the name of the CGI parameter (&t=) for this constraint, and tag is the type of constraint.
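With the Limit lines above in place, a search can pass the constraint as a CGI parameter; the host, path and tag value in this example are hypothetical:

```shell
# Search for "linux", restricted to documents indexed with tag "mytag"
# (host, CGI path and tag value are example assumptions)
curl 'http://localhost/cgi-bin/search.cgi?q=linux&t=mytag'
```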
Instead of tag/category/siteid in the example above, you can use any of the values from the table below: