Frequently Asked Questions

ht://Dig Copyright © 1995-1999 The ht://Dig Group
Please see the file COPYING for license information.


This FAQ is compiled by Geoff Hutchison <ghutchis@wso.williams.edu> and the most recent version is available at <http://www.htdig.org/FAQ.html>. Questions (and answers!) are greatly appreciated.

Questions

1. General

1.1. Can I search the internet with ht://Dig?
1.2. Can I index the internet with ht://Dig?
1.3. What's the difference between htdig and ht://Dig?
1.4. I sent mail to Andrew but I never got a response!
1.5. I sent a question to the mailing list but I never got a response!
1.6. I have a great idea/patch for ht://Dig!
1.7. Is ht://Dig Y2K compliant?
1.8. I think I found a bug. What should I do?
1.9. Does ht://Dig support phrase or near matching?
1.10. What are the practical and/or theoretical limits of ht://Dig?

2. Getting ht://Dig

2.1. What's the latest version of ht://Dig?
2.2. Are there binary distributions of ht://Dig?
2.3. Are there mirror sites for ht://Dig?
2.4. Is ht://Dig available by ftp?
2.5. Are patches around to upgrade between versions?

3. Compiling

3.1. When I compile ht://Dig I get an error about libht.a.
3.2. I get an error about -lg
3.3. I'm compiling on Digital Unix and I get messages about "unresolved" and "db_open."

4. Configuration

4.1. How come I can't index my site?
4.2. How can I change the output format of htsearch?
4.3. How do I index pages that start with '~'?
4.4. Can I use multiple databases?
4.5. OK, I can use multiple databases. Can I merge them into one?
4.6. Wow, ht://Dig eats up a lot of disk space. How can I cut down?
4.7. Can I use SSI or other CGIs in my htsearch results?
4.8. How do I index Word or PostScript documents?
4.9. How do I index PDF files without acroread?
4.10. How do I index documents in other languages?

5. Troubleshooting

5.1. I can't seem to index more than X documents in a directory.
5.2. I can't index PDF files.
5.3. When I run "rundig," I get a message about "DATABASE_DIR" not being found.
5.4. When I run htmerge, it stops with an "out of diskspace" message.
5.5. I have problems running rundig from cron under Linux.
5.6. When I run htmerge, it stops with an "Unexpected file type" message.
5.7. When I run htsearch, I get lots of Internal Server Errors (#500).
5.8. I'm having problems with indexing words with accented characters.
5.9. When I run htmerge, it stops with a "Word sort failed" message.
5.10. When htsearch has a lot of matches, it runs extremely slowly.

Answers

1. General

1.1. Can I search the internet with ht://Dig?

No. ht://Dig is a system for indexing and searching a small set of sites or an intranet. It is not meant to replace any of the many internet-wide search engines.

1.2. Can I index the internet with ht://Dig?

No, as above, ht://Dig is not meant as an internet-wide search engine. While there is theoretically nothing to stop you from indexing as much as you wish, practical considerations (e.g. time, disk space, memory, etc.) will limit this.

1.3. What's the difference between htdig and ht://Dig?

The complete ht://Dig package consists of several programs, one of which is called "htdig." This program performs the "digging," or indexing, of web pages. Of course, an index doesn't do you much good without the other programs in the package to sort it, search through it, and so on.

1.4. I sent mail to Andrew but I never got a response!

Andrew no longer does much work on ht://Dig. He has started a company called Contigo Software and is quite busy with it. To contact any of the current developers, send mail to <htdig3-dev@htdig.org>.

1.5. I sent a question to the mailing list but I never got a response!

Development of ht://Dig is done by volunteers. Since we all have other jobs, it may take a while before someone gets back to you.

1.6. I have a great idea/patch for ht://Dig!

Great! Development of ht://Dig continues through suggestions and improvements from users. If you have an idea (or even better, a patch), please send it to the ht://Dig mailing list so others can use it. For suggestions on how to submit patches, please check the Guidelines for Patch Submissions. If you'd like to make a feature request, you can do so through the ht://Dig bug database, either through <www.htdig.org> or by sending mail to <bugs@htdig.org>.

1.7. Is ht://Dig Y2K compliant?

ht://Dig should be Y2K compliant, since it never stores dates as two-digit years. However, under ht://Dig's license (the GPL), there is no warranty whatsoever, as permitted by law. If you would like an iron-clad, legally-binding guarantee, feel free to check the source code itself. If you discover a problem, please let us know!

1.8. I think I found a bug. What should I do?

Well, there are probably bugs out there. You have two options for reporting them. You can mail the ht://Dig mailing list at <htdig@htdig.org> or, better yet, report the bug to the bug database, which ensures it won't get lost amongst all of the other mail on the list. To do this, either follow the link from <www.htdig.org> or send mail to <bugs@htdig.org>. Please include as much information as possible, including the version of ht://Dig, the OS, and anything else that might be helpful. Often, running the programs with one "-v" or more (e.g. "-vvv") gives useful debugging information.

1.9. Does ht://Dig support phrase or near matching?

Not at the moment. This will, of course, eventually be added. But with volunteers currently doing the development and no "best" way to implement this in the current code, it may take some time before this happens. Feel free to send suggestions to <bugs@htdig.org>.

1.10. What are the practical and/or theoretical limits of ht://Dig?

The code itself doesn't put any real limit on the number of pages; several sites index hundreds of thousands of pages. The practical limits depend mostly on how many pages you plan to index. Some operating systems limit files to 2 GB in size, which can become a problem with a large database. Each program also has slightly different limits. Right now, htmerge performs a sort on the words indexed; most sort programs use a fair amount of RAM and temporary disk space as they assemble the sorted list. The htdig program stores a fair amount of information about the URLs it visits, in part to ensure each page is indexed only once, and this too takes a fair amount of RAM. With RAM as cheap as it is, it never hurts to throw more memory at indexing larger sites. In a pinch, swap will work, but it obviously slows things down considerably.


2. Getting ht://Dig

2.1. What's the latest version of ht://Dig?

The latest version is 3.1.2 as of this writing. Development is beginning on htdig4 as well as a few interim releases of htdig3.

2.2. Are there binary distributions of ht://Dig?

We're trying to provide consistent binary distributions for popular platforms. Contributed binary releases go in http://www.htdig.org/files/binaries/, and contributions may be placed in ftp://ftp.htdig.org/incoming/. Anyone who would like to build binary distributions of ht://Dig should at least sign up for the htdig3-announce mailing list.

2.3. Are there mirror sites for ht://Dig?

Not at the moment. Currently, there is only the main server at <www.htdig.org>. If you'd be willing to mirror the site, please contact <htdig3-dev@htdig.org>.

2.4. Is ht://Dig available by ftp?

Yes. You can find the current versions and several older versions at <ftp.htdig.org>.

2.5. Are patches around to upgrade between versions?

Most versions are also distributed as a patch to the previous version's source code. The most recent exception to this was version 3.1.0b1. Since this version switched from the GDBM database to DB2, the new database package needed to be shipped with the distribution. This made the potential patch almost as large as the regular distribution. Update patches resumed with version 3.1.0b2.


3. Compiling

3.1. When I compile ht://Dig I get an error about libht.a

This usually indicates that libstdc++ is either not installed or installed incorrectly. To get libstdc++ or any other GNU tool, check ftp://ftp.gnu.org/pub/gnu/.

3.2. I get an error about -lg

This is due to a bug in the Makefile.config.in of version 3.1.0b1. Remove all flags "-ggdb" in Makefile.config.in. Then type "./config.status" to rebuild the Makefiles and recompile. This bug is fixed in version 3.1.0b2.

3.3. I'm compiling on Digital Unix and I get messages about "unresolved" and "db_open."

Answer contributed by George Adams <learningapache@my-dejanews.com>

What you're seeing are problems related to the Berkeley DB library. htdig needs a fairly modern version of Berkeley DB, which is why it ships with one that works. (See that -L../db-2.4.14/dist line? That's where htdig's db library is.)
The solution is to modify the c++ command so it explicitly references the correct libdb.a. You can do this by replacing the "-ldb" directive in the c++ command with "../db-2.4.14/dist/libdb.a". This problem has been resolved as of version 3.1.0.


4. Configuration

4.1. How come I can't index my site?

There are a variety of reasons ht://Dig won't index a site. To get to the bottom of things, it's advisable to turn on some debugging output from the htdig program. When running from the command-line, try "-vvv" in addition to any other flags. This will add debugging output, including the responses from the server.
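For instance, assuming ht://Dig's config file lives at /usr/local/etc/htdig/htdig.conf (adjust the path for your installation), a verbose initial dig might look like:

```
htdig -i -vvv -c /usr/local/etc/htdig/htdig.conf
```

The "-i" flag forces an initial dig from scratch, and each extra "v" adds more detail to the output, including the server's responses.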

4.2. How can I change the output format of htsearch?

Answer contributed by: Malka Cymbalista <vumalki@ultra1.weizmann.ac.il>

You can change the output format of htsearch by creating different header, footer and result files that specify how you want the output to look. You then create a configuration file that specifies which files to use. In the html document that links to the search, you specify which configuration file to use.

So the configuration file would have the lines:

search_results_header: ${common_dir}/ccheader.html
search_results_footer: ${common_dir}/ccfooter.html
template_map:  Long long builtin-long \
               Short short builtin-short \
               Default default ${common_dir}/ccresult.html
template_name: Default
You would also put into the configuration file any other lines from the default configuration file that apply to htsearch.

The files ${common_dir}/ccheader.html and ${common_dir}/ccfooter.html and ${common_dir}/ccresult.html would be tailored to give the output in the desired format.

Assuming your configuration file is called cc.conf, the html file that links to the search has to set the config parameter equal to cc. The following line would do it:

<input type=hidden name=config value="cc">

4.3. How do I index pages that start with '~'?

ht://Dig should index pages starting with '~', requesting them just as any other web browser would. If you are having problems with this, check your server log files to see what file the server is attempting to return.

4.4. Can I use multiple databases?

Yes, though you may find it easier to have one larger database and use restrict or exclude fields on searches. To use multiple databases, you will need a config file for each database. Each config file then sets the "database_base" attribute to give its databases a distinct name.
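As a sketch, two hypothetical config files (file names and paths are illustrative) would each set a different base name:

```
# in sales.conf
database_base:  ${database_dir}/db.sales

# in support.conf
database_base:  ${database_dir}/db.support
```

You would then run htdig and htmerge once per config file (e.g. with "-c sales.conf"), and have each search form pass the matching config value to htsearch.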

4.5. OK, I can use multiple databases. Can I merge them into one?

As of version 3.1.0, you can do this with the -m option to htmerge.
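For example, to fold the databases named in a hypothetical support.conf into those named in sales.conf:

```
htmerge -c sales.conf -m support.conf
```

After the merge, searches run against sales.conf cover both collections.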

4.6. Wow, ht://Dig eats up a lot of disk space. How can I cut down?

There are several ways to cut down on disk space. One is not to use the "-a" option, which creates working copies of the databases and thus essentially doubles disk usage. Changing configuration attributes can also help: decreasing max_head_length and max_meta_description_length will cut down on the size of the stored excerpts (in fact, if you don't have use_meta_description set, you can set max_meta_description_length to 0!). Other techniques include removing the db.wordlist file and adding more words to the bad_words file.
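As a sketch, the excerpt-related attributes might look like this in your config file (the values are illustrative, and the second line assumes you have not set use_meta_description):

```
max_head_length:             10000
max_meta_description_length: 0
```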

4.7. Can I use SSI or other CGIs in my htsearch results?

Yes, but not directly. The htsearch CGI does not understand SSI markup and thus cannot include other CGIs. However, you can easily write a "wrapper" CGI or other server-parsed file that includes the htsearch results. For Perl script examples, see the files in contrib/ewswrap. For PHP examples, see the PHP guide in the contributed guides.
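As a minimal sketch, a server-parsed (.shtml) page could embed a canned search this way, assuming your server has SSI enabled and htsearch is installed at the usual /cgi-bin/htsearch (the URL and query here are illustrative):

```
<!--#include virtual="/cgi-bin/htsearch?config=htdig&words=manual" -->
```

To handle arbitrary user queries, you would still want a small wrapper CGI such as the ones in contrib/ewswrap.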

4.8. How do I index Word or PostScript documents?

This must be done with an external parser. A sample of such a parser is the contrib/parse_doc.pl Perl script. It will parse Word, PostScript and PDF documents, when used with the appropriate document to text converter. It uses catdoc to parse Word documents, and ps2ascii to parse PostScript files. The comments in the Perl script indicate where you can obtain these converters.
See below for an example.

4.9. How do I index PDF files without acroread?

This too can be done with an external parser, in combination with the xpdf 0.80 package. A sample of such a parser is the contrib/parse_doc.pl Perl script. It uses pdftotext, which is part of the xpdf package, to parse PDF documents. The comments in the Perl script indicate where you can obtain xpdf.

For example, you could put this in your configuration file:

external_parsers: application/msword /usr/local/bin/parse_doc.pl \
                  application/postscript /usr/local/bin/parse_doc.pl \
                  application/pdf /usr/local/bin/parse_doc.pl
You would also need to configure the script to indicate where all of the document to text converters are installed.

4.10. How do I index documents in other languages?

The first and most important thing you must do, to allow ht://Dig to properly support international characters, is to define the correct locale for the language and country you wish to support. This is done by setting the locale attribute (see question 5.8). The next step is to configure ht://Dig to use dictionary and affix files for the language of your choice. These can be the same dictionary and affix files used by the ispell software; a collection of them is available from Geoff Kuenning's International Ispell Dictionaries page.

For example, if you install German dictionaries in common/german, you could use these lines in your configuration file:

locale:               de_DE
lang_dir:             ${common_dir}/german
bad_word_list:        ${lang_dir}/bad_words
endings_affix_file:   ${lang_dir}/german.aff
endings_dictionary:   ${lang_dir}/german.0
endings_root2word_db: ${lang_dir}/root2word.db
endings_word2root_db: ${lang_dir}/word2root.db
You can build the endings database with htfuzzy endings. (This command may actually take days to complete, for releases older than 3.1.2. Current releases use faster regular expression matching, which will speed this up by a few orders of magnitude.) You will also need to redefine the synonyms file if you wish to use the synonyms search algorithm. This file is not included with most of the dictionaries, nor is the bad_words file. Current versions of ht://Dig only support 8-bit characters, so languages such as Chinese and Japanese, which require 16-bit characters, are not currently supported.
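Once the attributes above are in place, you would build the endings databases by pointing htfuzzy at the same config file (the file name here is illustrative):

```
htfuzzy -c german.conf endings
```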


5. Troubleshooting

5.1. I can't seem to index more than X documents in a directory.

This usually has to do with the default document size limit. If you set "max_doc_size" in your config file to something large enough to read the entire directory index (try 100000 for 100K), this should fix the problem. Of course, this will require more memory to read the larger file.

5.2. I can't index PDF files.

As above, this usually has to do with the default document size limit. What happens is that ht://Dig reads in only part of a PDF file and tries to index it, which usually fails. Try setting "max_doc_size" in your config file to a value larger than your largest PDF file.
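For example, if your largest PDF file is under 5 MB, a config line like this (the value is illustrative) should let htdig retrieve it whole:

```
max_doc_size: 5000000
```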

Another common problem is that htdig can't find the acroread program, which it uses to convert PDF files to PostScript. The solution is to obtain and install Adobe Acrobat Reader 3.0, if it's available for your system. An alternative is to use an external parser with the xpdf 0.80 package installed on your system, as described in question 4.9 above.

5.3. When I run "rundig," I get a message about "DATABASE_DIR" not being found.

This is due to a bug in the Makefile.in file in version 3.1.0b1. The easiest fix is to edit the rundig file and change the line "TMPDIR=@DATABASE_DIR@" to set TMPDIR to a directory with a large amount of temporary disk space for htmerge. This bug is fixed in version 3.1.0b2.

5.4. When I run htmerge, it stops with an "out of diskspace" message.

This means that htmerge has run out of temporary disk space for sorting. Either in your "rundig" script (if you run htmerge through that) or before you run htmerge, set the variable TMPDIR to a temp directory with lots of space.
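A minimal sketch, assuming /big/scratch is a volume with plenty of free space (the path is hypothetical), placed near the top of rundig or run before invoking htmerge:

```shell
# Point sort's temporary files at a roomy volume (hypothetical path).
TMPDIR=/big/scratch
export TMPDIR
```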

5.5. I have problems running rundig from cron under Linux.

This problem commonly occurs on Red Hat Linux 5.0 and 5.1, because of a bug in vixie-cron. It causes htmerge to fail with a "Word sort failed" error. It's fixed in Red Hat 5.2. You can install vixie-cron-3.0.1-26.{arch}.rpm from a 5.2 distribution to fix the problem on 5.0 or 5.1. A quick fix for the problem is to change the first line of rundig to "#!/bin/ash" which will run the script through the ash shell, but this doesn't solve the underlying problem.

5.6. When I run htmerge, it stops with an "Unexpected file type" message.

Often this is because the databases are corrupt. Try removing them and rebuilding. If this doesn't work, some have found that the solution for question 3.2 works for this as well. This should be fixed in version 3.1.0b2.

5.7. When I run htsearch, I get lots of Internal Server Errors (#500).

Answer contributed by David R. Barstis <dbarstis@nd.edu>

If you are running Apache under Solaris, try adding "PassEnv LD_LIBRARY_PATH" to Apache's httpd.conf file. These errors can also be caused by insufficient memory: if you often run memory-intensive programs (including htdig and htmerge themselves!), htsearch may run out of memory.

5.8. I'm having problems with indexing words with accented characters.

Most of the time, this is caused by either not setting or incorrectly setting the locale attribute. The default locale for most systems is the "portable" locale, which strips everything down to standard ASCII. Most systems expect something like locale: en_US or locale: fr_FR. Locale definitions are often found in /usr/share/locale, and the name your system expects may be indicated by the $LANGUAGE environment variable. See also question 4.10.

5.9. When I run htmerge, it stops with a "Word sort failed" message.

There are three common causes of this. First of all, the sort program may be running out of temporary file space. Fix this by freeing up some space where sort puts its temporary files, or change the setting of the TMPDIR environment variable to a directory on a volume with more space. A second common problem is on systems with a BSD version of the sort program (such as FreeBSD or NetBSD). This program uses the -T option as a record separator rather than an alternate temporary directory. On these systems, you must remove the TMPDIR environment variable from rundig, or change the code in htmerge/words.cc not to use the -T option. A third cause is the cron program on Red Hat Linux 5.0 or 5.1. (See question 5.5 above.)

5.10. When htsearch has a lot of matches, it runs extremely slowly.

When you run htsearch with no customization on a large database and it gets a lot of hits, it tends to take a long time to process them. Some users with large databases have reported much better performance on searches that yield many hits by setting the backlink_factor attribute in htdig.conf to 0 and sorting by score. The scores calculated this way aren't quite as good, but htsearch can process hits much faster when it doesn't need to look up the db.docdb record for each hit just to get the backlink count, date, or title, whether for scoring or for sorting. This applies to versions 3.1.0b3 and up. In version 3.2, currently under development, the databases will be structured differently, so searches should be faster.


$Author: ghutchis $
Last modified: Fri Apr 16 11:30:13 EDT 1999