Indexer configuration

Specifying WEB space to be indexed

When indexer tries to insert a new URL into the database or to index an existing one, it first checks whether this URL has a corresponding Server, Realm or Subnet command in indexer.conf. URLs without a corresponding Server, Realm or Subnet command are not indexed. By default, URLs which are already in the database but no longer have a matching Server/Realm/Subnet command are deleted from the database. This may happen, for example, after removing some Server/Realm/Subnet commands from indexer.conf.

These commands have the following format:

Server [Method] [SubSection] <pattern> [alias]
Realm [Method] [CaseType] [MatchType] [CmpType] <pattern> [alias]
Subnet [Method] [MatchType] <pattern>

The mandatory pattern parameter specifies a URL, part of a URL, or a pattern to compare against.

The optional Method parameter specifies a document action for this command. It may take any of these values: Allow, Disallow, HrefOnly, CheckOnly, Skip, CheckMP3, CheckMP3Only. By default, the value Allow is used.

  1. Allow

    Value Allow specifies that all corresponding documents will be indexed and scanned for new links. Depending on Content-Type, the appropriate external parser is executed if needed.

  2. Disallow

    Value Disallow specifies that all corresponding documents will be ignored and deleted from the database, if they were inserted there in the first place.

  3. HrefOnly

    Value HrefOnly specifies that all corresponding documents will only be scanned for new links (not indexed). This is useful, for example, when indexing mail archives, where index pages are only scanned to detect new messages for indexing.

Server HrefOnly Page
Server Allow    Path

  4. CheckOnly

    Value CheckOnly specifies that all corresponding documents will be requested with the HTTP HEAD method instead of HTTP GET, i.e. only brief info about the documents (size, last modification time, content type) will be fetched. This allows you, for example, to check links on your site:

Server HrefOnly
Realm  CheckOnly *

    These commands instruct indexer to scan all documents on the site and collect all links. Brief info about every document outside the site will be requested with the HEAD method. After indexing is done, the indexer -S command will show the status of all documents from this site.

  5. Skip

    Value Skip specifies that all corresponding documents will be skipped while indexing. This is useful when you need to temporarily disable reindexing of several sites while still being able to search them. These documents will be marked as expired.

  6. CheckMP3

    Value CheckMP3 specifies that the corresponding documents will be checked for MP3 tags even if the Content-Type is not equal to audio/mpeg. This is useful, for example, if the remote server supplies application/octet-stream as the Content-Type for MP3 files. If an MP3 tag is present, these files will be indexed as MP3 files; otherwise they will be processed according to their Content-Type.

  7. CheckMP3Only

    This value is equal to CheckMP3, but if an MP3 tag is not present, the document will not be processed according to its Content-Type at all.

Use the optional SubSection parameter to specify which part of the server should be checked. The values for SubSection are the same as the "Follow" command arguments: page, path, site or world, with "path" as the default. If SubSection is not specified, the current "Follow" value is used. Thus, the single command Server site http://localhost/ and the combination of Follow site and Server http://localhost/ have the same effect.

  1. path subsection

    When indexer looks for a "Server" command corresponding to a URL, it checks whether the discovered URL starts with the URL given in the Server command argument, with the trailing file name stripped. For example, if Server path http://localhost/path/to/index.html is given, all URLs beginning with http://localhost/path/to/ correspond to this Server command.

    The following commands have the same effect except that they insert different URLs into the database:

Server path http://localhost/path/to/index.html
Server path http://localhost/path/to/index
Server path http://localhost/path/to/index.cgi?q=bla
Server path http://localhost/path/to/index?q=bla

  2. site subsection

    indexer checks whether the discovered URL has the same hostname as the URL given in the Server command. For example, Server site http://localhost/path/to/a.html will allow the whole http://localhost/ server to be indexed.

  3. world subsection

    If the world subsection is specified in a Server command, every URL is considered to match this Server command. See the explanation below.

  4. page subsection

    This subsection matches only the single URL given in the Server argument.

  5. subsection in news:// schema

    The subsection is always treated as "site" for the news:// URL schema, because news:// has no nested paths like ftp:// or http:// . Use Server news:// to index the whole news server or, for example, Server news:// to index all messages from the "udm" hierarchy.
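The "path" subsection rule described above can be sketched in Python. This is an illustrative model of the documented behavior, not mnoGoSearch source: strip the trailing file name from the Server argument, then test whether a discovered URL starts with the resulting prefix.

```python
from urllib.parse import urlsplit

def server_path_prefix(server_url):
    """Strip the trailing file name (and any query string) from a
    Server path argument, keeping everything up to the last '/'."""
    parts = urlsplit(server_url)
    path = parts.path.rsplit("/", 1)[0] + "/"
    return f"{parts.scheme}://{parts.netloc}{path}"

def matches_path(server_url, url):
    # A URL matches when it starts with the Server argument minus its file name.
    return url.startswith(server_path_prefix(server_url))

# All four Server path arguments from the example reduce to the same prefix:
assert server_path_prefix("http://localhost/path/to/index.html") == "http://localhost/path/to/"
assert server_path_prefix("http://localhost/path/to/index.cgi?q=bla") == "http://localhost/path/to/"
assert matches_path("http://localhost/path/to/index.html", "http://localhost/path/to/other.html")
assert not matches_path("http://localhost/path/to/index.html", "http://localhost/other/")
```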

The optional CaseType parameter specifies the case sensitivity for string comparison. It can take one of the following values: case for case-insensitive comparison, or nocase for case-sensitive comparison.

The optional CmpType parameter specifies the comparison type and can take two values: Regex and String. String wildcards are the default comparison type. You can use the ? and * signs in the pattern; they mean "one character" and "any number of characters" respectively. For example, if you want to index all HTTP sites in the .ru domain, use this command:

Realm http://*.ru/*

The Regex comparison type takes a regular expression as its argument. Activate it using the Regex keyword. For example, you can describe everything in the .ru domain using the regex comparison type:

Realm Regex ^http://.*\.ru/

The optional MatchType parameter specifies the match type. The possible values are Match and NoMatch, with Match as the default. Realm NoMatch has the reverse effect: a URL that does not match the given pattern corresponds to this Realm command. For example, use this command to index everything outside the .com domain:

Realm NoMatch http://*.com/*
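The String wildcard matching and the NoMatch inversion can be modelled with Python's fnmatch module, whose * and ? wildcards behave the same way. A sketch of the described semantics, not mnoGoSearch source:

```python
from fnmatch import fnmatchcase

def realm_matches(pattern, url, match_type="Match"):
    """String-wildcard Realm check: '*' matches any run of characters,
    '?' matches exactly one; NoMatch inverts the result."""
    hit = fnmatchcase(url, pattern)
    return not hit if match_type == "NoMatch" else hit

assert realm_matches("http://*.ru/*", "http://www.example.ru/page.html")
assert not realm_matches("http://*.ru/*", "http://www.example.com/page.html")
# NoMatch inverts the test: index everything outside the .com domain
assert realm_matches("http://*.com/*", "http://www.example.ru/", "NoMatch")
```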

The optional alias argument provides very sophisticated URL rewriting, more powerful than the other aliasing mechanisms. Take a look at the Section called Aliases for an explanation of alias argument usage. Alias works only with the Regex comparison type and has no effect with the String type.

Server command

This is the main command of the indexer.conf file. It is used to add servers or their parts to the indexing space. This command also tells indexer to insert the given URL into the database at startup.

E.g. the command Server http://localhost/ allows indexing of the whole http://localhost/ server. You can also specify a path to index only a server subsection: Server http://localhost/subsection/.

Note: You can prevent indexer from adding the URL given in a Server command by using the -q indexer command-line argument. This is useful when you have hundreds or thousands of Server commands and their URLs are already in the database, as it gives a faster indexer startup.

Realm command

The Realm command is a more powerful means of describing a web area to be indexed. It works almost like the Server command, but takes a regular expression or string wildcards as its pattern parameter and does not insert any URL into the database for indexing.

Subnet command

The Subnet command is another way to describe a web area to be indexed. It works almost like the Server command, but takes string wildcards as its pattern argument, which are compared against IP addresses instead of URLs. The argument may contain the ? and * signs, which mean "one character" and "any number of characters" respectively. For example, if you want to index all HTTP sites in your local subnet, use this command:

Subnet 192.168.*.*

You may use the "NoMatch" optional argument. For example, if you want to index everything outside the 195.x.x.x subnet, use:

Subnet NoMatch 195.*.*.*

Using different parameters for a server and its subsections

indexer looks up "Server" and "Realm" commands in their order of appearance. Thus, if you want to give different parameters to, e.g., a whole server and one of its subsections, you should place the subsection line before the whole-server line. Imagine that you have a server subdirectory which contains news articles. Those articles should be reindexed more often than the rest of the server. The following combination may be useful in such cases:

# Add subsection
Period 200000
Server http://servername/news/

# Add server
Period 600000
Server http://servername/

These commands set a different reindexing period for the /news/ subdirectory compared with the server as a whole. indexer will choose the first matching "Server" record: http://servername/news/page1.html gets the first record's parameters because that record matches and appears first.
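The first-match rule above can be sketched as a simple ordered lookup. An illustrative model of the described behavior, not mnoGoSearch source; the URLs and Period values are the ones from the example:

```python
# Commands are evaluated in order of appearance: the first matching
# Server entry supplies the parameters (here, the reindexing Period).
servers = [
    ("http://servername/news/", 200000),   # subsection listed first
    ("http://servername/",      600000),   # whole server listed second
]

def period_for(url):
    for prefix, period in servers:
        if url.startswith(prefix):
            return period
    return None  # no matching Server command: the URL is not indexed

assert period_for("http://servername/news/page1.html") == 200000
assert period_for("http://servername/index.html") == 600000
assert period_for("http://other/") is None
```

Swapping the two entries would make the whole-server record shadow the /news/ one, which is exactly why the subsection must be listed first.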

Default indexer behavior

The default behavior of indexer is to follow links that have a corresponding Server/Realm command in the indexer.conf file. It also jumps between servers if both of them are present in indexer.conf, either directly in a Server command or indirectly via a Realm command. For example, suppose there are two Server commands:

Server http://www/
Server http://web/

When indexing http://www/page1.html, indexer WILL follow the link http://web/page2.html if it is found. Note that these pages are on different servers, but BOTH of them have a corresponding Server record.

If one of the Server commands is deleted, indexer will remove all expired URLs of that server during the next reindexing.

Using indexer -f <filename>

Running indexer -i -f url.txt is very useful: you can maintain the list of required servers in url.txt. When a new URL is added to url.txt, indexer will index that URL's server during its next startup.


Aliases

mnoGoSearch has alias support, making it possible to index sites while taking their content from another location. For example, if you index your local web server, pages can be taken directly from disk without involving the web server in the indexing process. Another example is building a search engine for a primary site while using its mirror for indexing. There are several ways of using aliases.

Alias indexer.conf command

Format of "Alias" indexer.conf command:

Alias <masterURL> <mirrorURL>

E.g. you wish to index a site using its nearest German mirror. Add lines like these in your indexer.conf:


search.cgi will display URLs from the master site, but indexer will take the corresponding pages from the mirror site.

Another example: you want to index everything in a domain, and one of its servers is stored on the local machine in the /home/httpd/htdocs/ directory. These commands will be useful:

Realm http://*
Alias file:/home/httpd/htdocs/

Indexer will take that server's pages from the local disk and index the other sites via HTTP.

Different aliases for server parts

Aliases are searched in the order of their appearance in indexer.conf. So, you can create different aliases for a server and its parts:

# First, create an alias for the /stat/ directory, which
# is not under the common location:
Alias  file:/usr/local/stat/htdocs/

# Then create alias for the rest of the server:
Alias file:/usr/local/apache/htdocs/

Note: If you change the order of these commands, the alias for the /stat/ directory will never be found.
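The Alias mechanism described above amounts to an ordered prefix rewrite, which can be sketched in Python. The master URLs below are hypothetical (the originals are not shown in this text); only the mirror paths come from the example:

```python
aliases = [
    # Order matters: the first matching prefix wins, so the more
    # specific /stat/ alias must come before the whole-server one.
    # The http://example.com/ master URLs are assumed for illustration.
    ("http://example.com/stat/", "file:/usr/local/stat/htdocs/"),
    ("http://example.com/",      "file:/usr/local/apache/htdocs/"),
]

def apply_alias(url):
    for master, mirror in aliases:
        if url.startswith(master):
            return mirror + url[len(master):]
    return url  # no alias applies; fetch the URL as-is

assert apply_alias("http://example.com/stat/x.html") == "file:/usr/local/stat/htdocs/x.html"
assert apply_alias("http://example.com/index.html") == "file:/usr/local/apache/htdocs/index.html"
```

Reversing the list order reproduces the pitfall from the Note: the whole-server prefix would match first and the /stat/ alias would never be reached.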

Using alias in Server command

You may specify the location used by indexer as an optional argument for Server command:

Server  file:/home/httpd/htdocs/

Using alias in Realm command

Aliases in the Realm command are a very powerful feature based on regular expressions. The implementation is similar to the PHP preg_replace() function. Aliases in the Realm command work only with the "regex" match type; they DO NOT work with the "string" match type.

Use this syntax for Realm aliases:

Realm regex <URL_pattern> <alias_pattern>

indexer searches URLs for matches to URL_pattern and builds a URL alias using alias_pattern. alias_pattern may contain references of the form $n, where n is a number in the range 0-9. Each such reference is replaced by the text captured by the n'th parenthesized subpattern; $0 refers to the text matched by the whole pattern. Opening parentheses are counted from left to right (starting from 1) to obtain the number of a capturing subpattern.

Example: your company hosts several hundred users with domains of the form www.username.yourname.com. Every user's site is stored on disk in "htdocs" under the user's home directory: /home/username/htdocs/.

You may write this command into indexer.conf (note that the dot '.' character has a special meaning in regular expressions and must be escaped with a '\' sign when used literally):

Realm regex (http://www\.)(.*)(\.yourname\.com/)(.*)  file:/home/$2/htdocs/$4

Imagine that indexer processes the page http://www.john.yourname.com/news/index.html. It will build the captures $0 to $4:

   $0 = 'http://www.john.yourname.com/news/index.html' (whole pattern match)
   $1 = 'http://www.'       subpattern matches '(http://www\.)'
   $2 = 'john'              subpattern matches '(.*)'
   $3 = '.yourname.com/'    subpattern matches '(\.yourname\.com/)'
   $4 = 'news/index.html'   subpattern matches '(.*)'

Then indexer will compose the alias using the $2 and $4 captures:

file:/home/john/htdocs/news/index.html

and will use the result as the document location to fetch it.
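The same substitution can be reproduced with Python's re module, where $n backreferences are written \n. The sample URL is reconstructed from the captures shown above; the pattern and alias are the ones from the example:

```python
import re

pattern = r"(http://www\.)(.*)(\.yourname\.com/)(.*)"
alias   = r"file:/home/\2/htdocs/\4"   # $2 and $4 become \2 and \4 in Python

url = "http://www.john.yourname.com/news/index.html"
m = re.fullmatch(pattern, url)
assert m.group(2) == "john"
assert m.group(4) == "news/index.html"

# Composing the alias yields the document's on-disk location:
assert re.sub(pattern, alias, url) == "file:/home/john/htdocs/news/index.html"
```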

AliasProg command

You may also use the "AliasProg" command for aliasing purposes. AliasProg is useful for major web hosting companies that want to index their web space by taking documents directly from disk without involving the web server in the indexing process. The document layout may be too complex to describe with an alias in a Realm command. AliasProg specifies an external program to be called: it takes a URL as an argument and prints a single line with the appropriate alias to stdout. Use $1 in the command line to pass the URL.

For example, this AliasProg command uses the 'replace' utility from the MySQL distribution to replace a URL substring with file:/usr/local/apache/htdocs/:

AliasProg  "echo $1 | /usr/local/mysql/bin/replace file:/usr/local/apache/htdocs/"

You may also write your own very complex program to process URLs.

ReverseAlias command

The indexer.conf ReverseAlias command allows URL mapping before a URL is inserted into the database. Unlike the Alias command, which rewrites a URL right before a document is downloaded, the ReverseAlias command rewrites it right after a link is found.

ReverseAlias http://name2/

All links with a short server name will be mapped to links with a full server name before they are inserted into the database.

One possible use is cutting various unnecessary strings like PHPSESSID=XXXX.

E.g. cutting it from a URL like http://www/a.php?PHPSESSID=XXX, where PHPSESSID is the only parameter. The question mark is deleted as well:

ReverseAlias regex  (http://[^?]*)[?]PHPSESSID=[^&]*$          $1

Cutting it from a URL like http://www/a.php?PHPSESSID=xxx&..., i.e. when PHPSESSID is the first parameter but other parameters follow it. The '&' sign after PHPSESSID is deleted as well; the question mark (?) character is not deleted:

ReverseAlias regex  (http://[^?]*[?])PHPSESSID=[^&]*&(.*)      $1$2

Cutting from URL like http://www/a.php?a=b&PHPSESSID=xxx or http://www/a.php?a=b&PHPSESSID=xxx&c=d, where PHPSESSID is not the first parameter. The '&' sign before PHPSESSID is deleted:

ReverseAlias regex  (http://.*)&PHPSESSID=[^&]*(.*)         $1$2
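The three PHPSESSID rules can be checked with Python's re module. This sketch applies the rules in order and returns the first rewrite that changes the URL; it models the described behavior, not mnoGoSearch internals:

```python
import re

rules = [
    # PHPSESSID is the only parameter: drop it together with the '?'
    (r"(http://[^?]*)[?]PHPSESSID=[^&]*$", r"\1"),
    # PHPSESSID is first but others follow: drop it with the trailing '&'
    (r"(http://[^?]*[?])PHPSESSID=[^&]*&(.*)", r"\1\2"),
    # PHPSESSID is not first: drop it with the leading '&'
    (r"(http://.*)&PHPSESSID=[^&]*(.*)", r"\1\2"),
]

def reverse_alias(url):
    for pattern, repl in rules:
        new = re.sub(pattern, repl, url)
        if new != url:
            return new
    return url

assert reverse_alias("http://www/a.php?PHPSESSID=xxx") == "http://www/a.php"
assert reverse_alias("http://www/a.php?PHPSESSID=xxx&a=b") == "http://www/a.php?a=b"
assert reverse_alias("http://www/a.php?a=b&PHPSESSID=xxx&c=d") == "http://www/a.php?a=b&c=d"
```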

Alias in search.htm search template

It is also possible to define aliases in the search template (search.htm). The Alias command in search.htm is identical to the one in indexer.conf; however, it is applied during searching, not during indexing.

The syntax of the search.htm Alias command is the same as in indexer.conf:

Alias <find-prefix> <replace-prefix>

For example, there is the following command in search.htm:

Alias http://localhost/

Search returns a page with the following URL:


As a result, the $(DU) variable will be replaced NOT with this URL:


but with the following URL (the result of processing with Alias):


ServerTable

Since version 3.2.7, mnoGoSearch has the "ServerTable" indexer.conf command.

Loading servers table

When ServerTable mysql://user:pass@host/dbname/tablename[?srvinfo=infotablename] is specified, indexer loads the server information from the given tablename SQL table and the server parameters from the given infotablename SQL table. If the srvinfo parameter is not specified, the parameters are loaded from the srvinfo table. Check the structure of the server and srvinfo tables in the create/mysql/create.txt file. If there is no structure example for your database, use this one as a model, and please send us the structure for your database!

You may use several ServerTable commands to load server information from different tables.

Server table structure

The server table contains all the fields necessary to describe server parameters. Field names correspond to indexer.conf commands; for example, the "period" field corresponds to the indexer.conf "Period" command. Default field values are the same as the default indexer.conf parameters.

The "gindex" field corresponds to the "Index" command. The name has been slightly changed to avoid SQL reserved word usage.

For the description of several fields see the Section called Database schema in Chapter 9.

Note: Only those rows are read from the table whose "active" field has the value 1 and whose "parent" field has the value 0. This makes it possible to let users submit new URLs into the servers table while giving the administrator the possibility to approve the added URLs.


FlushServerTable sets the "active" field to 0 for all ServerTable records. Use this command to deactivate all commands in the ServerTable before loading new commands from indexer.conf or from another ServerTable.

External parsers

The mnoGoSearch indexer can use external parsers to index various file types (MIME types).

A parser is an executable program which converts one of the MIME types to text/plain or text/html. For example, if you have some PostScript files, you can use the ps2ascii parser (filter), which reads a PostScript file from stdin and produces ASCII on stdout, to index their contents.

Supported parser types

Indexer supports four types of parsers that can:

  • read data from stdin and send the result to stdout

  • read data from file and send the result to stdout

  • read data from file and send the result to file

  • read data from stdin and send the result to file

Setting up parsers

  1. Configure MIME types

    Configure your web server to send the appropriate "Content-Type" header. For Apache, have a look at the mime.types file; most MIME types are already defined there.

    If you want to index local files or files retrieved via FTP, use the "AddType" command in indexer.conf to associate file name extensions with their MIME types. For example:

AddType text/html *.html

  2. Add parsers

    Add lines with parser definitions. The lines have the following format with three arguments:

Mime <from_mime> <to_mime> <command line>

    For example, the following line defines the parser for man pages:

# Use deroff for parsing man pages ( *.man )
Mime  application/x-troff-man   text/plain   deroff

    This parser will take data from stdin and output result to stdout.

    Many parsers cannot operate on stdin and require a file to read from. In this case indexer creates a temporary file in /tmp and removes it when the parser is done. Use the $1 macro in the parser command line to substitute the file name. For example, the Mime command for the "catdoc" MS Word to ASCII converter may look like this:

Mime application/msword text/plain "/usr/bin/catdoc -a $1"

    If your parser writes its result into an output file, use the $2 macro. indexer replaces $2 with a temporary file name, starts the parser, reads the result from this temporary file, then removes it. For example:

Mime application/msword text/plain "/usr/bin/catdoc -a $1 >$2"

    The parser above reads data from the first temporary file and writes the result to the second one. Both temporary files are removed when the parser exits. Note that the result of using this parser is exactly the same as with the previous one, but the execution modes differ: file->stdout and file->file respectively.
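The four execution modes can be sketched as follows. This is an illustrative model of the $1/$2 macro substitution described above, not mnoGoSearch source, using 'cat' as a stand-in parser:

```python
import os
import subprocess
import tempfile

def run_parser(command: str, data: bytes) -> bytes:
    """Replace $1 with an input temp file and $2 with an output temp
    file; with neither macro present, run the parser stdin -> stdout."""
    in_name = out_name = None
    try:
        if "$1" in command:
            fd, in_name = tempfile.mkstemp()
            os.write(fd, data)
            os.close(fd)
            command = command.replace("$1", in_name)
        if "$2" in command:
            fd, out_name = tempfile.mkstemp()
            os.close(fd)
            command = command.replace("$2", out_name)
        proc = subprocess.run(command, shell=True,
                              input=None if in_name else data,
                              stdout=subprocess.PIPE)
        if out_name:  # file -> file mode: read the parser's output file
            with open(out_name, "rb") as f:
                return f.read()
        return proc.stdout
    finally:
        # Temporary files are removed once the parser is done.
        for name in (in_name, out_name):
            if name:
                os.unlink(name)

# stdin -> stdout mode, with 'cat' standing in for a real parser:
assert run_parser("cat", b"hello") == b"hello"
# file -> file mode: same result, different execution mode:
assert run_parser("cat $1 >$2", b"hello") == b"hello"
```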

Avoid indexer hang on parser execution

To prevent indexer from hanging on parser execution, you may specify the maximum time in seconds a parser is allowed to run, using the ParserTimeOut command in indexer.conf. For example:

ParserTimeOut 600

Default value is 300 seconds, i.e. 5 minutes.
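The effect of such a timeout can be sketched with subprocess timeouts. A hypothetical model of the behavior, not mnoGoSearch source; a tiny 1-second limit is used here only so the example runs quickly:

```python
import subprocess

PARSER_TIMEOUT = 1  # seconds; the indexer default described above is 300

def run_with_timeout(argv):
    # Kill the parser if it does not finish within the configured limit,
    # so one stuck document cannot stall the whole indexing run.
    try:
        return subprocess.run(argv, capture_output=True,
                              timeout=PARSER_TIMEOUT).stdout
    except subprocess.TimeoutExpired:
        return b""  # treat the document as unparsed instead of hanging

assert run_with_timeout(["echo", "ok"]).strip() == b"ok"
assert run_with_timeout(["sleep", "5"]) == b""   # killed after 1 second
```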

Pipes in parser's command line

You can use pipes in a parser's command line. For example, these lines are useful for indexing gzipped man pages from the local disk:

AddType  application/x-gzipped-man  *.1.gz *.2.gz *.3.gz *.4.gz
Mime     application/x-gzipped-man  text/plain  "zcat | deroff"

Charsets and parsers

Some parsers produce output in a charset different from the one given in the LocalCharset command. Specify the parser's output charset to make indexer convert it properly. For example, if your catdoc is configured to produce output in the windows-1251 charset but LocalCharset is koi8-r, use this command for parsing MS Word documents:

Mime  application/msword  "text/plain; charset=windows-1251" "catdoc -a $1"

UDM_URL variable

When executing a parser, indexer creates the UDM_URL environment variable whose value is the URL being processed. You can use this variable in parser scripts.

Note: When running several threads, don't rely on the UDM_URL variable; use the ${URL} variable in the parser command line instead. See Mime for more details.
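A minimal sketch of a parser script that uses UDM_URL. The annotate helper is hypothetical, shown only to illustrate reading the variable; indexer would set UDM_URL in the environment before invoking the script:

```python
import os

def annotate(text: str) -> str:
    # Hypothetical stdin->stdout parser helper: record which URL the
    # parser was invoked for, using the UDM_URL variable set by indexer.
    url = os.environ.get("UDM_URL", "(unknown)")
    return f"<!-- source: {url} -->\n{text}"

# When indexer runs the parser, UDM_URL is already in the environment;
# it is simulated here for the demonstration:
os.environ["UDM_URL"] = "http://localhost/doc.html"
assert annotate("<p>hello</p>").startswith("<!-- source: http://localhost/doc.html -->")
```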

Some third-party parsers

  • RPM parser by Mario Lang

    The following shell script, installed as /usr/local/bin/rpminfo (the name used in the Mime command below), formats RPM package information as HTML:
    /usr/bin/rpm -q --queryformat="<html><head><title>RPM: %{NAME} %{VERSION}-%{RELEASE}
    (%{GROUP})</title><meta name=\"description\" content=\"%{SUMMARY}\"></head><body>
    %{DESCRIPTION}\n</body></html>" -p $1


Mime application/x-rpm text/html "/usr/local/bin/rpminfo $1"

    It renders RPM information nicely, e.g.:

3. RPM: mysql 3.20.32a-3 (Applications/Databases) [4]
           Mysql is a SQL (Structured Query Language) database server.
           Mysql was written by Michael (Monty) Widenius. See the CREDITS
           file in the distribution for more credits for mysql and related
           (application/x-rpm) 2088855 bytes

  • catdoc MS Word to text converter

    Home page, also listed on Freshmeat

    Mime application/msword         text/plain      "catdoc $1"

  • xls2csv MS Excel to text converter

    It is supplied with catdoc.

    Mime application/   text/plain      "xls2csv $1"

  • pdftotext Adobe PDF converter

    Supplied with xpdf project.

    Homepage, also listed on Freshmeat

    Mime application/pdf            text/plain      "pdftotext $1 -"

  • unrtf RTF to html converter


    Mime text/rtf*              text/html       "/usr/local/mnogosearch/sbin/unrtf --html $1"
    Mime application/rtf        text/html       "/usr/local/mnogosearch/sbin/unrtf --html $1"

Please feel free to contribute your scripts and parser configurations.