Cache mode storage

Introduction

Beginning with version 3.1.5, mnoGoSearch supports a new word storage mode called "cache", which makes it possible to index and search quickly through several million documents.

Cache mode word indexes structure

The main idea of cache storage mode is that the word index is stored on disk rather than in the SQL database. URL information (the "url" table), however, is kept in the SQL database. The word index is divided into 8192 files using the 32-bit word_id built from the CRC32 of the word. The index is located in files under the /var/tree directory of the mnoGoSearch installation.

Cache mode tools

There are two additional programs, cached and splitter, used in cache mode indexing.

cached is a TCP daemon which collects word information from indexers and stores it on your hard disk. It can operate in two modes: the old mode, in which it works like the former cachelogd daemon and only logs data, and the new mode, in which the cachelogd and splitter functionality are combined.

splitter is a program that creates fast word indexes from the data collected by cached. Those indexes are used later in the search process.

Starting cache mode

To start "cache mode" follow these steps:

  1. Start cached server:

    cd /usr/local/mnogosearch/sbin

    ./cached & 2>cached.out

    It will write some debug information into the cached.out file. cached also creates a cached.pid file in the /var directory of the mnoGoSearch installation.

    cached listens for TCP connections and can accept connections from several indexers running on different machines. The theoretical limit is 128 simultaneous indexer connections. In the old mode cached stores the information sent by indexers in the /var/splitter/ directory of the mnoGoSearch installation. In the new mode it stores the data in the /var/tree/ directory.

    By default, cached starts in the new mode. To run it in the old mode, i.e. logs-only mode, start it with the -l switch:

    cached -l

    Or specify the "LogsOnly yes" command in your cached.conf.

    You can specify the port for cached to use without recompiling. To do that, run

    ./cached -p8000

    where 8000 is the port number you choose.

    You can also specify the directory where data is stored (the /var directory by default) with this command:

    ./cached -w /path/to/var/dir

  2. Configure your indexer.conf as usual, and in the DBAddr command set the dbmode parameter to cache and the cached parameter to localhost:7000 (see the Section called DBAddr command in Chapter 3). A sketch of such a DBAddr line is shown after this list.

  3. Run indexers. Several indexers can be executed simultaneously. Note that you may install indexers on different machines and point them all at the same cached server. Such a distributed setup makes indexing faster.

  4. Create the word index. This stage is not needed if cached runs in the new, i.e. combined, mode. Once some information has been gathered by the indexers and collected in the /var/splitter/ directory by cached, it is possible to create the fast word indexes. The splitter program is responsible for this. It is installed in the /sbin directory. Note that indexes can be created at any time without interrupting the current indexing process.

    Indexes are created in the following two steps:

    1. Send a -HUP signal to cached. cached will flush all buffers to log files on the hard disk. You can use the cached.pid file to do this:

      kill -HUP `cat /usr/local/mnogosearch/var/cached.pid`

    2. Build the word index. Run splitter without any arguments:

      /usr/local/mnogosearch/sbin/splitter

      It will sequentially take all 4096 prepared files in the /var/splitter/ directory and use them to build the fast word index. Processed logs in the /var/splitter/ directory are truncated after this operation.
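
As an illustration of step 2, a minimal sketch of such a DBAddr line in indexer.conf might look as follows; the MySQL driver, credentials, and database name here are only placeholders for whatever your installation actually uses:

DBAddr mysql://user:password@localhost/mnogosearch/?dbmode=cache&cached=localhost:7000

The same indexer.conf can then be copied to every indexing machine mentioned in step 3, with the cached parameter pointing at the central cached host.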

Optional usage of several splitters

splitter has two command line arguments, -f [first file] and -t [second file], which allow limiting the range of files used. If no parameters are specified, splitter processes all 4096 prepared files. You can limit the file range using the -f and -t keys, specifying their values in HEX notation. For example, splitter -f 000 -t A00 will create word indexes using the files in the range from 000 to A00. These keys make it possible to run several splitters at the same time, which usually speeds up index building. For example, this shell script starts four splitters in the background:


#!/bin/sh
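# run four splitters in parallel, each covering one quarter of the HEX file range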
splitter -f 000 -t 3f0 &
splitter -f 400 -t 7f0 &
splitter -f 800 -t bf0 &
splitter -f c00 -t ff0 &

Using run-splitter script

There is a run-splitter script in the /sbin directory of the mnoGoSearch installation. It helps to execute all of the index building steps in the proper sequence.

"run-splitter" has these two command line parameters:

run-splitter --hup --split

or a short version:

run-splitter -k -s

Each parameter activates the corresponding index building step. run-splitter executes both steps of index building in the proper order:

  1. Sending a -HUP signal to cached. The --hup (or -k) run-splitter argument is responsible for this.

  2. Running splitter. The --split (or -s) key.

In most cases just run the "run-splitter" script with both the -k and -s arguments. Separate usage of these flags, which correspond to the individual index building steps, is rarely required.
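
If you want to rebuild the indexes periodically, a crontab entry along these lines could be used; the schedule is only an example, and the path assumes the default /usr/local/mnogosearch installation prefix:

0 3 * * * /usr/local/mnogosearch/sbin/run-splitter -k -s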

Doing search

To start using search.cgi in "cache mode", edit your search.htm template as usual and add this line:

DBMode cache
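
In context, the top of your search.htm might look like this sketch; the DBAddr shown here is a placeholder, and you should use the same database address as in your indexer.conf:

DBAddr mysql://user:password@localhost/mnogosearch/
DBMode cache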

Using search limits

To use search limits in cache mode, you should add the appropriate Limit command(s) to your indexer.conf (or cached.conf, if cached is used) and to search.htm (or searchd.conf, if searchd is used).

For example, to use search limits by tag, by category, and by site, add the following lines to search.htm or to indexer.conf (searchd.conf, if searchd is used):


Limit t:tag
Limit c:category
Limit site:siteid

where t is the name of the CGI parameter (&t=) used for this constraint and tag is the type of the constraint.
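
For instance, with the Limit t:tag line above, a query restricted to a hypothetical tag value "sales" could pass the constraint through that CGI parameter (the host name and script path here are placeholders):

http://localhost/cgi-bin/search.cgi?q=word&t=sales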

Instead of tag/category/siteid in the example above, you can use any of the values from the table below:

Table 5-1. Cache limit types

category    Category limit.
tag         Tag limit.
time        Time limit.
hostname    Hostname (URL) limit.
language    Language limit.
content     Content-Type limit.
siteid      url.site_id limit.