The Mercury User's Guide

Copyright (C) 1995-1999 The University of Melbourne.

Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.

Introduction

This document describes the compilation environment of Mercury. It describes how to use `mmc', the Mercury compiler; how to use `mmake', the "Mercury make" program, a tool built on top of ordinary or GNU make to simplify the handling of Mercury programs; how to use `mdb', the Mercury debugger; and how to use `mprof', the Mercury profiler.

We strongly recommend that programmers use `mmake' rather than invoking `mmc' directly, because `mmake' is generally easier to use and avoids unnecessary recompilation.

File naming conventions

Mercury source files must be named `*.m'. Each Mercury source file should contain a single Mercury module whose module name should be the same as the filename without the `.m' extension.
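
For example, a source file named `hello.m' should begin with a declaration for the module `hello'. A minimal sketch of such a file is

:- module hello.
:- interface.
:- import_module io.
:- pred main(io__state::di, io__state::uo) is det.
:- implementation.
main --> io__write_string("Hello, world\n").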

The Mercury implementation uses a variety of intermediate files, which are described below. But all you really need to know is how to name source files. For historical reasons, the default behaviour is for intermediate files to be created in the current directory, but if you use the `--use-subdirs' option to `mmc' or `mmake', all these intermediate files will be created in a `Mercury' subdirectory, where you can happily ignore them. Thus you may wish to skip the rest of this chapter.
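
For example, assuming a program whose single source file is `hello.m', you could compile it with

mmc --use-subdirs -c hello.m

and the intermediate files for `hello' would then be created under the `Mercury' subdirectory rather than in the current directory.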

In cases where the source file name and module name don't match, the names for intermediate files are based on the name of the module from which they are derived, not on the source file name.

Files ending in `.int', `.int0', `.int2' and `.int3' are interface files; these are generated automatically by the compiler, using the `--make-interface' (or `--make-int'), `--make-private-interface' (or `--make-priv-int'), `--make-short-interface' (or `--make-short-int') options. Files ending in `.opt' are interface files used in inter-module optimization, and are created using the `--make-optimization-interface' (or `--make-opt-int') option. Similarly, files ending in `.trans_opt' are interface files used in transitive inter-module optimization, and are created using the `--make-transitive-optimization-interface' (or `--make-trans-opt-int') option.

Since the interface of a module changes less often than its implementation, the `.int', `.int0', `.int2', `.int3', `.opt', and `.trans_opt' files will remain unchanged on many compilations. To avoid unnecessary recompilations of the clients of the module, the timestamps on these files are updated only if their contents change. The `.date', `.date0', `.date3', `.optdate', and `.trans_opt_date' files associated with the module serve as date stamps; they are used when deciding whether the interface files need to be regenerated.

Files ending in `.d' are automatically-generated Makefile fragments which contain the dependencies for a module. Files ending in `.dep' are automatically-generated Makefile fragments which contain the rules for an entire program. Files ending in `.dv' are automatically-generated Makefile fragments which contain variable definitions for an entire program.

As usual, `.c' files are C source code, `.h' files are C header files, and `.o' files are object code. In addition, `.pic_o' files are object code files that contain position-independent code (PIC).

Using the Mercury compiler

Following a long Unix tradition, the Mercury compiler is called `mmc' (for "Melbourne Mercury Compiler"). Some of its options (e.g. `-c', `-o', and `-I') have a similar meaning to that in other Unix compilers.

Arguments to `mmc' may be either file names (ending in `.m'), or module names, with `.' (rather than `__' or `:') as the module qualifier. For a module name such as `foo.bar.baz', the compiler will look for the source in files `foo.bar.baz.m', `bar.baz.m', and `baz.m', in that order. Note that if the file name does not include all the module qualifiers (e.g. if it is `bar.baz.m' or `baz.m' rather than `foo.bar.baz.m'), then the module name in the `:- module' declaration for that module must be fully qualified.

To compile a program which consists of just a single source file, use the command

mmc filename.m

Unlike traditional Unix compilers, however, `mmc' will put the executable into a file called `filename', not `a.out'.

For programs that consist of more than one source file, we strongly recommend that you use Mmake (see section Using Mmake). Mmake will perform all the steps listed below, using automatic dependency analysis to ensure that things are done in the right order, and that steps are not repeated unnecessarily. If you use Mmake, then you don't need to understand the details of how the Mercury implementation goes about building programs. Thus you may wish to skip the rest of this chapter.

To compile a source file to object code without creating an executable, use the command

mmc -c filename.m

`mmc' will put the object code into a file called `module.o', where module is the name of the Mercury module defined in `filename.m'. It will also leave the intermediate C code in a file called `module.c'. If the source file contains nested modules, then each sub-module will get compiled to separate C and object files.

Before you can compile a module, you must make the interface files for the modules that it imports (directly or indirectly). You can create the interface files for one or more source files using the following commands:

mmc --make-short-int filename1.m filename2.m ...
mmc --make-priv-int filename1.m filename2.m ...
mmc --make-int filename1.m filename2.m ...

If you are going to compile with `--intermodule-optimization' enabled, then you also need to create the optimization interface files.

mmc --make-opt-int filename1.m filename2.m ...

If you are going to compile with `--transitive-intermodule-optimization' enabled, then you also need to create the transitive optimization files.

mmc --make-trans-opt filename1.m filename2.m ...

Given that you have made all the interface files, one way to create an executable for a multi-module program is to compile all the modules at the same time using the command

mmc filename1.m filename2.m ...

This will by default put the resulting executable in `filename1', but you can use the `-o filename' option to specify a different name for the output file, if you so desire.

The other way to create an executable for a multi-module program is to compile each module separately using `mmc -c', and then link the resulting object files together. The linking is a two stage process.

First, you must create and compile an initialization file, which is a C source file containing calls to automatically generated initialization functions contained in the C code of the modules of the program:

c2init module1.c module2.c ... > main-module_init.c
mgnuc -c main-module_init.c

The `c2init' command line must contain the name of the C file of every module in the program. The order of the arguments is not important. The `mgnuc' command is the Mercury GNU C compiler; it is a shell script that invokes the GNU C compiler `gcc' with the options appropriate for compiling the C programs generated by Mercury.

You then link the object code of each module with the object code of the initialization file to yield the executable:

ml -o main-module module1.o module2.o ... main-module_init.o

`ml', the Mercury linker, is another shell script that invokes a C compiler with options appropriate for Mercury, this time for linking. `ml' also pipes any error messages from the linker through `mdemangle', the Mercury symbol demangler, so that error messages refer to predicate and function names from the Mercury source code rather than to the names used in the intermediate C code.

The above command puts the executable in the file `main-module'. The same command line without the `-o' option would put the executable into the file `a.out'.

`mmc' and `ml' both accept a `-v' (verbose) option. You can use that option to see what is actually going on. For the full set of options of `mmc', see section Invocation.

Running programs

Once you have created an executable for a Mercury program, you can go ahead and execute it. You may however wish to specify certain options to the Mercury runtime system. The Mercury runtime accepts options via the `MERCURY_OPTIONS' environment variable. The most useful of these are the options that set the size of the stacks. (For the full list of available options, see section Environment variables.)

The det stack and the nondet stack are allocated fixed sizes at program start-up. The default size is 512k for the det stack and 128k for the nondet stack, but these can be overridden with the `--detstack-size' and `--nondetstack-size' options, whose arguments are the desired sizes of the det and nondet stacks respectively, in units of kilobytes. On operating systems that provide the appropriate support, the Mercury runtime will ensure that stack overflow is trapped by the virtual memory system.

With conservative garbage collection (the default), the heap will start out with a zero size, and will be dynamically expanded as needed. When not using conservative garbage collection, the heap has a fixed size like the stacks. The default size is 4 Mb, but this can be overridden with the `--heap-size' option.
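
For example (a sketch; the program name and sizes are illustrative, and it assumes the `--heap-size' argument is also in kilobytes), to run a program with a 1024k det stack, a 256k nondet stack and an 8 Mb heap, you could set `MERCURY_OPTIONS' in the shell before running the program:

MERCURY_OPTIONS="--detstack-size 1024 --nondetstack-size 256 --heap-size 8192"
export MERCURY_OPTIONS
./myprog arg1 arg2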

Using Mmake

Mmake, short for "Mercury Make", is a tool for building Mercury programs that is built on top of ordinary or GNU Make. With Mmake, building even a complicated Mercury program consisting of a number of modules is as simple as

mmake main-module.depend
mmake main-module

Mmake only recompiles those files that need to be recompiled, based on automatically generated dependency information. Most of the dependencies are stored in `.d' files that are automatically recomputed every time you recompile, so they are never out-of-date. A little bit of the dependency information is stored in `.dep' and `.dv' files which are more expensive to recompute. The `mmake main-module.depend' command which recreates the `main-module.dep' and `main-module.dv' files needs to be repeated only when you add or remove a module from your program, and there is no danger of getting an inconsistent executable if you forget this step -- instead you will get a compile or link error.

`mmake' allows you to build more than one program in the same directory. Each program must have its own `.dep' and `.dv' files, and therefore you must run `mmake program.depend' for each program.

If there is a file called `Mmake' or `Mmakefile' in the current directory, Mmake will include that file in its automatically-generated Makefile. The `Mmake' file can override the default values of various variables used by Mmake's builtin rules, or it can add additional rules, dependencies, and actions.
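
For example, a small `Mmake' file might override one of the default variables and add a rule of its own. The following is a sketch only; the option settings and the extra rule are illustrative, and the `mtags' script it uses is the one supplied with the Mercury distribution.

# Pass these options to the Mercury compiler for every module.
MCFLAGS = -O3 --intermodule-optimization $(EXTRA_MCFLAGS)

# An additional rule: build a tags file for all the Mercury sources.
tags: *.m
	mtags *.m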

Mmake's builtin rules are defined by the file `prefix/lib/mercury/mmake/Mmake.rules' (where prefix is `/usr/local/mercury-version' by default, and version is the version number, e.g. `0.6'), as well as the rules and variables in the automatically-generated `.dep' and `.dv' files. These rules define the following targets:

`main-module.depend'
Creates the files `main-module.dep' and `main-module.dv' from `main-module.m' and the modules it imports. This step must be performed first. It is also required whenever you wish to change the level of inter-module optimization performed (see section Overall optimization options).
`main-module.ints'
Ensure that the interface files for main-module and its imported modules are up-to-date. (If the underlying `make' program does not handle transitive dependencies, this step may be necessary before attempting to make `main-module' or `main-module.check'; if the underlying `make' is GNU Make, this step should not be necessary.)
`main-module.check'
Perform semantic checking on main-module and its imported modules. Error messages are placed in `.err' files.
`main-module'
Compiles and links main-module using the Mercury compiler. Error messages are placed in `.err' files.
`main-module.split'
Compiles and links main-module using the Mercury compiler, with the Mercury compiler's `--split-c-files' option enabled. For more information about `--split-c-files', see section Output-level (LLDS -> C) optimization options.
`libmain-module'
Builds a library whose top-level module is main-module. This will build a static object library, a shared object library (for platforms that support it), and the necessary interface files. For more information, see section Libraries.
`libmain-module.install'
Builds and installs a library whose top-level module is main-module. This target will build and install a static object library and (for platforms that support it) a shared object library, for the default grade and also for the additional grades specified in the LIBGRADES variable. It will also build and install the necessary interface files. For more information, see section Supporting multiple grades and architectures.
`main-module.clean'
Removes the automatically generated files that contain the compiled code of the program and the error messages produced by the compiler. Specifically, this will remove all the `.c', `.s', `.o', `.pic_o', `.prof', `.no', `.ql', and `.err' files belonging to the named main-module or its imported modules. Use this target whenever you wish to change the compilation model (see section Compilation model options). This target is also recommended, in addition to the mandatory `main-module.depend', whenever you wish to change the level of inter-module optimization performed (see section Overall optimization options).
`main-module.realclean'
Removes all the automatically generated files. In addition to the files removed by main-module.clean, this removes the `.int', `.int0', `.int2', `.int3', `.opt', `.trans_opt', `.date', `.date0', `.date3', `.optdate', `.trans_opt_date', `.h' and `.d' files belonging to one of the modules of the program, and also the various possible executables, libraries and dependency files for the program as a whole --- `main-module', `main-module.split', `main-module.nu', `main-module.nu.save', `main-module.nu.debug', `main-module.nu.debug.save', `main-module.sicstus', `main-module.sicstus.debug', `libmain-module.a', `libmain-module.so', `main-module.split.a', `main-module.init', `main-module.dep' and `main-module.dv'.
`clean'
This makes `main-module.clean' for every main-module for which there is a `main-module.dep' file in the current directory, as well as deleting the profiling files `Prof.CallPair', `Prof.Counts', `Prof.Decl', `Prof.MemWords' and `Prof.MemCells'.
`realclean'
This makes `main-module.realclean' for every main-module for which there is a `main-module.dep' file in the current directory, as well as deleting the profiling files as per the `clean' target.

The variables used by the builtin rules (and their default values) are defined in the file `prefix/lib/mercury/mmake/Mmake.vars'; however, these may be overridden by user `Mmake' files. Some of the more useful variables are:

MAIN_TARGET
The name of the default target to create if `mmake' is invoked without any target explicitly named on the command line.
MC
The executable that invokes the Mercury compiler.
GRADEFLAGS and EXTRA_GRADEFLAGS
Compilation model options (see section Compilation model options) to pass to the Mercury compiler, linker, and other tools (in particular mmc, mgnuc, ml, and c2init).
MCFLAGS and EXTRA_MCFLAGS
Options to pass to the Mercury compiler. (Note that compilation model options should be specified in GRADEFLAGS, not in MCFLAGS.)
MGNUC
The executable that invokes the C compiler.
MGNUCFLAGS and EXTRA_MGNUCFLAGS
Options to pass to the C compiler.
ML
The executable that invokes the linker.
MLFLAGS and EXTRA_MLFLAGS
Options to pass to the linker. (Note that compilation model options should be specified in GRADEFLAGS, not in MLFLAGS.)
MLLIBS and EXTRA_MLLIBS
A list of `-l' options specifying libraries used by the program (or library) that you are building. See section Using libraries.
MLOBJS
A list of extra object files to link into any programs or libraries that you are building.
C2INITFLAGS and EXTRA_C2INITFLAGS
Options to pass to the c2init program. (Note that compilation model options and extra files to be processed by c2init should not be specified in C2INITFLAGS - they should be specified in GRADEFLAGS and C2INITARGS, respectively.)
C2INITARGS and EXTRA_C2INITARGS
Extra files to be processed by c2init. These variables should not be used for specifying flags to c2init (that's what C2INITFLAGS is for) since they are also used to derive extra dependency information.
EXTRA_LIBRARIES
A list of extra Mercury libraries to link into any programs or libraries that you are building. Libraries should be specified using their base name; that is, without any `lib' prefix or extension. For example the library including the files `libfoo.a' and `foo.init' would be referred to as just `foo'.
EXTRA_LIB_DIRS
A list of extra Mercury library directory hierarchies to search when looking for extra libraries.
INSTALL_PREFIX
The path to the root of the directory hierarchy where the libraries, etc. you are building should be installed. The default is to install in the same location as the Mercury compiler being used to do the install.
LIBGRADES
A list of additional grades which should be built when installing libraries. The default is to install the Mercury compiler's default set of grades. Note that this may not be the set of grades in which the standard libraries were actually installed. Note also that any GRADEFLAGS settings will also be applied when the library is built in each of the listed grades, so you may not get what you expect if those options are not subsumed by each of the grades listed.

Other variables also exist - see `prefix/lib/mercury/mmake/Mmake.vars' for a complete list.

If you wish to temporarily change the flags passed to an executable, rather than setting the various `FLAGS' variables directly, you can set an `EXTRA_' variable. This is particularly intended for use where a shell script needs to call mmake and add an extra parameter, without interfering with the flag settings in the `Mmakefile'.

For each of the variables for which there is a version with an `EXTRA_' prefix, there is also a version with an `ALL_' prefix that is defined to include both the ordinary and the `EXTRA_' version. If you wish to use the values of any of these variables in your Mmakefile (as opposed to setting the values), then you should use the `ALL_' version.
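
For example, a shell script could add an option for a single invocation without touching the `Mmakefile' at all (the option and program name here are illustrative only):

mmake EXTRA_MCFLAGS=--inhibit-warnings myprog

A rule in the `Mmakefile' that needs the complete set of compiler flags should then refer to `$(ALL_MCFLAGS)', which includes both `$(MCFLAGS)' and `$(EXTRA_MCFLAGS)', rather than to `$(MCFLAGS)' alone.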

It is also possible to override these variables on a per-file basis. For example, if you have a module called say `bad_style.m' which triggers lots of compiler warnings, and you want to disable the warnings just for that file, but keep them for all the other modules, then you can override MCFLAGS just for that file. This is done by setting the variable `MCFLAGS-bad_style', as shown here:

MCFLAGS-bad_style = --inhibit-warnings

Mmake has a few options, including `--use-subdirs', `--save-makefile', `--verbose', and `--no-warn-undefined-vars'. For details about these options, see the man page or type `mmake --help'.

Finally, since Mmake is built on top of Make or GNU Make, you can also make use of the features and options supported by the underlying Make. In particular, GNU Make has support for running jobs in parallel, which is very useful if you have a machine with more than one CPU.

Libraries

Often you will want to use a particular set of Mercury modules in more than one program. The Mercury implementation includes support for developing libraries, i.e. sets of Mercury modules intended for reuse. It allows separate compilation of libraries and, on many platforms, it supports shared object libraries.

Writing libraries

A Mercury library is identified by a top-level module, which should contain all of the modules in that library as sub-modules. It may be as simple as this `mypackage.m' file:

:- module mypackage.
:- interface.
:- include_module foo, bar, baz.

This defines a module `mypackage' containing sub-modules `mypackage:foo', `mypackage:bar', and `mypackage:baz'.

It is also possible to build libraries of unrelated modules, so long as the top-level module imports all the necessary modules. For example:

:- module blah.
:- import_module fee, fie, foe, fum.

This example defines a module `blah', which has no functionality of its own, and which is just used for grouping the unrelated modules `fee', `fie', `foe', and `fum'.

Generally it is better style for each library to consist of a single module which encapsulates its sub-modules, as in the first example, rather than just a group of unrelated modules, as in the second example.

Building libraries

Generally Mmake will do most of the work of building libraries automatically. Here's a sample Mmakefile for creating a library.

MAIN_TARGET = libmypackage
depend: mypackage.depend

The Mmake target `libfoo' is a built-in target for creating a library whose top-level module is `foo.m'. The automatically generated Make rules for the target `libfoo' will create all the files needed to use the library.

Mmake will create static (non-shared) object libraries and, on most platforms, shared object libraries; however, we do not yet support the creation of dynamic link libraries (DLLs) on Windows. Static libraries are created using the standard tools `ar' and `ranlib'. Shared libraries are created using the `--make-shared-lib' option to `ml'. The automatically-generated Make rules for `libmypackage' will look something like this:

libmypackage: libmypackage.a libmypackage.so \
		$(mypackage.ints) $(mypackage.int3s) \
		$(mypackage.opts) $(mypackage.trans_opts) mypackage.init

libmypackage.a: $(mypackage.os)
	rm -f libmypackage.a
	$(AR) $(ARFLAGS) libmypackage.a $(mypackage.os) $(MLOBJS)
	$(RANLIB) $(RANLIBFLAGS) libmypackage.a

libmypackage.so: $(mypackage.pic_os)
	$(ML) $(MLFLAGS) --make-shared-lib -o libmypackage.so \
		$(mypackage.pic_os) $(MLPICOBJS) $(MLLIBS)

libmypackage.init:
	...

clean:
	rm -f libmypackage.a libmypackage.so

If necessary, you can override the default definitions of the variables such as `ML', `MLFLAGS', `MLPICOBJS', and `MLLIBS' to customize the way shared libraries are built. Similarly `AR', `ARFLAGS', `MLOBJS', `RANLIB', and `RANLIBFLAGS' control the way static libraries are built. (The `MLOBJS' variable is supposed to contain a list of additional object files to link into the library, while the `MLLIBS' variable should contain a list of `-l' options naming other libraries used by this library. `MLPICOBJS' is described below.)

Note that to use a library, as well as the shared or static object library, you also need the interface files. That's why the `libmypackage' target builds `$(mypackage.ints)' and `$(mypackage.int3s)'. If the people using the library are going to use intermodule optimization, you will also need the intermodule optimization interfaces. The `libmypackage' target will build `$(mypackage.opts)' if `--intermodule-optimization' is specified in your `MCFLAGS' variable (this is recommended). Similarly, if the people using the library are going to use transitive intermodule optimization, you will also need the transitive intermodule optimization interfaces (`$(mypackage.trans_opts)'). These will be built if `--trans-intermod-opt' is specified in your `MCFLAGS' variable.

In addition, with certain compilation grades, programs will need to execute some startup code to initialize the library; the `mypackage.init' file contains information about initialization code for the library. The `libmypackage' target will build this file.

On some platforms, shared objects must be created using position independent code (PIC), which requires passing some special options to the C compiler. On these platforms, Mmake will create `.pic_o' files, and `$(mypackage.pic_os)' will contain a list of the `.pic_o' files for the library whose top-level module is `mypackage'. In addition, `$(MLPICOBJS)' will be set to `$(MLOBJS)' with all occurrences of `.o' replaced with `.pic_o'. On other platforms, position independent code is the default, so `$(mypackage.pic_os)' will just be the same as `$(mypackage.os)', which contains a list of the `.o' files for that module, and `$(MLPICOBJS)' will be the same as `$(MLOBJS)'.

Installing libraries

Once you have built a library, you can install (i.e. copy) the shared object library, the static object library, the interface files (possibly including the optimization interface files and the transitive optimization interface files), and the initialization file into a different directory, or into several different directories for that matter -- though it is probably easiest for the users of the library if you keep them in a single directory. Alternatively, you could package them up into a `tar', `shar', or `zip' archive and ship them to the people who will use the library.

Using libraries

To use a library, you need to set the Mmake variables `VPATH', `MCFLAGS', `MLFLAGS', `MLLIBS', and `C2INITARGS' to specify the name and location of the library or libraries that you wish to use. If you are using `--intermodule-optimization', you may also need to set `MGNUCFLAGS' if the library uses the C interface. For example, if you want to link in the libraries `mypackage' and `myotherlib', which were built in the directories `/some/directory/mypackage' and `/some/directory/myotherlib' respectively, you could use the following settings:

# Specify the location of the `mypackage' and `myotherlib' directories
MYPACKAGE_DIR = /some/directory/mypackage
MYOTHERLIB_DIR = /some/directory/myotherlib

# The following stuff tells Mmake to use the two libraries
VPATH = $(MYPACKAGE_DIR):$(MYOTHERLIB_DIR):$(MMAKE_VPATH)
MCFLAGS = -I$(MYPACKAGE_DIR) -I$(MYOTHERLIB_DIR) $(EXTRA_MCFLAGS)
MLFLAGS = -R$(MYPACKAGE_DIR) -R$(MYOTHERLIB_DIR) $(EXTRA_MLFLAGS) \
          -L$(MYPACKAGE_DIR) -L$(MYOTHERLIB_DIR)
MLLIBS = -lmypackage -lmyotherlib $(EXTRA_MLLIBS)
C2INITARGS = $(MYPACKAGE_DIR)/mypackage.init \
             $(MYOTHERLIB_DIR)/myotherlib.init

# This line may be needed if `--intermodule-optimization'
# is in `MCFLAGS'. `-I' options should be added for any other
# directories containing header files that the libraries require.
MGNUCFLAGS = -I$(MYPACKAGE_DIR) -I$(MYOTHERLIB_DIR) $(EXTRA_MGNUCFLAGS)

Here `VPATH' is a colon-separated list of path names specifying directories that Mmake will search for interface files. The `-I' options in `MCFLAGS' tell `mmc' where to find the interface files. The `-R' options in `MLFLAGS' tell the loader where to find shared libraries, and the `-L' options tell the linker where to find libraries. (Note that the `-R' options must precede the `-L' options.) The `-l' options tell the linker which libraries to link with. The extra arguments to `c2init' specified in the `C2INITARGS' variable tell `c2init' where to find the `.init' files for the libraries (so that it can generate appropriate initialization code) as well as telling Mmake that any `_init.c' files generated depend on these files. The `-I' options in `MGNUCFLAGS' tell the C preprocessor where to find the header files for the libraries.

The example above assumes that the static object library, shared object library, interface files and initialization file for each Mercury library being used are all put in a single directory, which is probably the simplest way of organizing things, but the Mercury implementation does not require that.

Supporting multiple grades and architectures

In order to better support using and installing libraries in multiple grades, `mmake' now has support for alternative library directory hierarchies. These have the same structure as the `prefix/lib/mercury' tree, including the different subdirectories for different grades and different machine architectures.

In order to support the installation of a library into such a tree, you simply need to specify (e.g. in your `Mmakefile') the path prefix and the list of grades to install:

INSTALL_PREFIX = /my/install/dir
LIBGRADES = asm_fast asm_fast.gc.tr.debug

This specifies that libraries should be installed in `/my/install/dir/lib/mercury', in the default grade plus `asm_fast' and `asm_fast.gc.tr.debug'. If `INSTALL_PREFIX' is not specified, `mmake' will attempt to install the library in the same place as the standard Mercury libraries. If `LIBGRADES' is not specified, `mmake' will use the Mercury compiler's default set of grades, which may or may not correspond to the actual set of grades in which the standard Mercury libraries were installed.

To actually install a library `libfoo', use the `mmake' target `libfoo.install'. This also installs all the needed interface files, and (if intermodule optimisation is enabled) the relevant intermodule optimisation files.

One can override the list of grades to install for a given library `libfoo' by setting the `LIBGRADES-foo' variable, or add to it by setting `EXTRA_LIBGRADES-foo'.
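
For example (the library names and the grades listed are purely illustrative):

# Install libmypackage in exactly these grades, regardless of LIBGRADES.
LIBGRADES-mypackage = asm_fast.gc asm_fast.gc.prof

# Install libmyotherlib in the usual grades plus a debugging grade.
EXTRA_LIBGRADES-myotherlib = asm_fast.gc.debug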

Note that currently it is not possible to set the installation prefix on a library-by-library basis.

Once a library is installed in such a hierarchy, using it is easy. Suppose the user wishes to use the library `mypackage' (installed in the tree rooted at `/some/directory/mypackage') and the library `myotherlib' (installed in the tree rooted at `/some/directory/myotherlib'). The user need only set the following Mmake variables:

EXTRA_LIB_DIRS = /some/directory/mypackage/lib/mercury \
		/some/directory/myotherlib/lib/mercury
EXTRA_LIBRARIES = mypackage myotherlib

Mmake will then ensure that the appropriate directories are searched for the relevant interface files, module initialisation files, compiled libraries, etc.

One can specify extra libraries to be used on a program-by-program basis. For instance, if the program `foo' also uses the library `mylib4foo', but the other programs governed by the Mmakefile don't, then one can declare:

EXTRA_LIBRARIES-foo = mylib4foo

Debugging

Quick overview

This section gives a quick and simple guide to getting started with the debugger. The remainder of this chapter contains more detailed documentation.

To use the debugger, you must first compile your program with debugging enabled. You can do this by using the `--debug' option to `mmc', or by including `GRADEFLAGS = --debug' in your `Mmakefile'.

bash$ mmc --debug hello.m

Once you've compiled with debugging enabled, you can use the `mdb' command to invoke your program under the debugger:

bash$ mdb ./hello arg1 arg2 ...

Any arguments (such as `arg1 arg2 ...' in this example) that you pass after the program name will be given as arguments to the program.

The debugger will print a start-up message and will then show you the first trace event, namely the call to main/2:

       1:      1  1 CALL pred hello:main/2-0 (det)
                         hello.m:13
mdb>

By hitting enter at the `mdb>' prompt, you can step through the execution of your program to the next trace event:

       2:      2  2 CALL pred io:write_string/3-0 (det)
                         io.m:2837 (hello.m:14)
mdb>
Hello, world
       3:      2  2 EXIT pred io:write_string/3-0 (det)
                         io.m:2837 (hello.m:14)
mdb>

For each trace event, the debugger prints out several pieces of information. The three numbers at the start of the display are the event number, the call sequence number, and the call depth. (You don't really need to pay too much attention to those.) They are followed by the event type (e.g. `CALL' or `EXIT'). After that comes the identification of the procedure in which the event occurred, consisting of the module-qualified name of the predicate or function to which the procedure belongs, followed by its arity, mode number and determinism. This may sometimes be followed by a "path" (see section Tracing of Mercury programs). At the end is the file name and line number of the called procedure and (if available) also the file name and line number of the call.
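
As a concrete illustration, here is how those pieces map onto the second event in the session above:

  2                    the event number
  2                    the call sequence number
  2                    the call depth
  CALL                 the event type
  io:write_string/3-0  the predicate name, its arity and its mode number
  (det)                the determinism
  io.m:2837            the file name and line number of the called procedure
  (hello.m:14)         the file name and line number of the call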

The most useful mdb commands have single-letter abbreviations. The `alias' command will show these abbreviations:

mdb> alias
?      =>    help
EMPTY  =>    step
NUMBER =>    step
P      =>    print *
b      =>    break
c      =>    continue
d      =>    stack
f      =>    finish
g      =>    goto
h      =>    help
p      =>    print
r      =>    retry
s      =>    step
v      =>    vars

The `P' or `print *' command will display the values of any live variables in scope. The `f' or `finish' command can be used if you want to skip over a call. The `b' or `break' command can be used to set break-points. The `d' or `stack' command will display the call stack. The `quit' command will exit the debugger.

That should be enough to get you started. But if you have GNU Emacs installed, you should strongly consider using the Emacs interface to `mdb' -- see the following section.

For more information about the available commands, use the `?' or `help' command, or see section Debugger commands.

GNU Emacs interface

As well as the command-line debugger, mdb, there is also an Emacs interface to this debugger. Note that the Emacs interface only works with GNU Emacs, not with XEmacs.

With the Emacs interface, the debugger will display your source code as you trace through it, marking the line that is currently being executed, and allowing you to easily set breakpoints on particular lines in your source code. You can have separate windows for the debugger prompt, the source code being executed, and for the output of the program being executed. In addition, most of the mdb commands are accessible via menus.

To start the Emacs interface, you first need to put the following text in the file `.emacs' in your home directory, replacing "/usr/local/mercury-1.0" with the directory that your Mercury implementation was installed in.

(setq load-path (cons (expand-file-name
  "/usr/local/mercury-1.0/lib/mercury/elisp")
  load-path))
(autoload 'mdb "gud" "Invoke the Mercury debugger" t)

Build your program with debugging enabled, as described in section Quick overview or section Preparing a program for debugging. Then start up Emacs, e.g. using the command `emacs', and type M-x mdb RET. Emacs will then prompt you for the mdb command to invoke

Run mdb (like this): mdb 

and you should type in the name of the program that you want to debug and any arguments that you want to pass to it:

Run mdb (like this): mdb ./hello arg1 arg2 ...

Emacs will then create several "buffers": one for the debugger prompt, one for the input and output of the program being executed, and one or more for the source files. By default, Emacs will split the display into two parts, called "windows", so that two of these buffers will be visible. You can use the command C-x o to switch between windows, and you can use the command C-x 2 to split a window into two windows. You can use the "Buffers" menu to select which buffer is displayed in each window.

If you're using X-Windows, then it is a good idea to set the Emacs variable `pop-up-frames' to `t' before starting mdb, since this will cause each buffer to be displayed in a new "frame" (i.e. a new X window). You can set this variable interactively using the `set-variable' command, i.e. M-x set-variable RET pop-up-frames RET t RET. Or you can put `(setq pop-up-frames t)' in the `.emacs' file in your home directory.

For more information on buffers, windows, and frames, see the Emacs documentation.

Another useful Emacs variable is `gud-mdb-directories'. This specifies the list of directories to search for source files. You can use a command such as

M-x set-variable RET
gud-mdb-directories RET
(list "/foo/bar" "../other" "/home/guest") RET

to set it interactively, or you can put a command like

(setq gud-mdb-directories
  (list "/foo/bar" "../other" "/home/guest"))

in your `.emacs' file.

At each trace event, the debugger will search for the source file corresponding to that event, first in the same directory as the program, and then in the directories specified by the `gud-mdb-directories' variable. It will display the source file, with the line number corresponding to that trace event marked by an arrow (`=>') at the start of the line.

Several of the debugger features can be accessed by moving the cursor to the relevant part of the source code and then selecting a command from the menu. You can set a break point on a line by moving the cursor to the appropriate line in your source code (e.g. with the arrow keys, or by clicking the mouse there), and then selecting the "Set breakpoint on line" command from the "Breakpoints" sub-menu of the "MDB" menu. You can set a breakpoint on a procedure by moving the cursor over the procedure name and then selecting the "Set breakpoint on procedure" command from the same menu. And you can display the value of a variable by moving the cursor over the variable name and then selecting the "Print variable" command from the "Data browsing" sub-menu of the "MDB" menu. Most of the menu commands also have keyboard short-cuts, which are displayed on the menu.

Note that mdb's `context' command should not be used if you are using the Emacs interface; otherwise the Emacs interface won't be able to parse the file names and line numbers that mdb outputs, and so it won't be able to highlight the correct location in the source code.

Tracing of Mercury programs

The Mercury debugger is based on a modified version of the box model on which the four-port debuggers of most Prolog systems are based. Such debuggers abstract the execution of a program into a sequence, also called a trace, of execution events of various kinds. The four kinds of events supported by most Prolog systems (their ports) are

call
A call event occurs just after a procedure has been called, and control has just reached the start of the body of the procedure.
exit
An exit event occurs when a procedure call has succeeded, and control is about to return to its caller.
redo
A redo event occurs when all computations to the right of a procedure call have failed, and control is about to return to this call to try to find alternative solutions.
fail
A fail event occurs when a procedure call has run out of alternatives, and control is about to return to the rightmost computation to its left that still has possibly successful alternatives left.

Mercury also supports these four kinds of events, but not all events can occur for every procedure call. Which events can occur for a procedure call, and in what order, depend on the determinism of the procedure. The possible event sequences for procedures of the various determinisms are as follows.

nondet procedures
a call event, zero or more repeats of (exit event, redo event), and a fail event
multi procedures
a call event, one or more repeats of (exit event, redo event), and a fail event
semidet and cc_nondet procedures
a call event, and either an exit event or a fail event
det and cc_multi procedures
a call event and an exit event
failure procedures
a call event and a fail event
erroneous procedures
a call event

Besides the event types call, exit, redo and fail, which describe the interface of a call, Mercury also supports several types of events that report on what is happening internal to a call. Each of these internal event types has an associated parameter called a path. The internal event types are:

then
A then event occurs when execution reaches the start of the then part of an if-then-else. The path associated with the event specifies which if-then-else this is.
else
An else event occurs when execution reaches the start of the else part of an if-then-else. The path associated with the event specifies which if-then-else this is.
disj
A disj event occurs when execution reaches the start of a disjunct in a disjunction. The path associated with the event specifies which disjunct of which disjunction this is.
switch
A switch event occurs when execution reaches the start of one arm of a switch (a disjunction in which each disjunct unifies a bound variable with a different function symbol). The path associated with the event specifies which arm of which switch this is.

A path is a sequence of path components separated by semicolons. Each path component is one of the following:

cnum
The num'th conjunct of a conjunction.
dnum
The num'th disjunct of a disjunction.
snum
The num'th arm of a switch.
?
The condition of an if-then-else.
t
The then part of an if-then-else.
e
The else part of an if-then-else.
~
The goal inside a negation.
q
The goal inside an existential quantification.

A path describes the position of a goal inside the body of a procedure definition. For example, if the procedure body is a disjunction in which each disjunct is a conjunction, then the path `d2;c3;' denotes the third conjunct within the second disjunct. If the third conjunct within the second disjunct is an atomic goal such as a call or a unification, then this will be the only goal whose path has `d2;c3;' as a prefix. If it is a compound goal, then its components will all have paths that have `d2;c3;' as a prefix, e.g. if it is an if-then-else, then its three components will have the paths `d2;c3;?;', `d2;c3;t;' and `d2;c3;e;'.
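
As a further sketch, here is a hypothetical clause annotated with the path of each of its goals, assuming that the internal form used by the compiler is identical to this source form (in particular, that the disjunction has not been converted into a switch; see below):

p(X, Y) :-
    (
        q(X),           % d1;c1;
        Y = 1           % d1;c2;
    ;
        r(X),           % d2;c1;
        ( s(X) ->       % d2;c2;?;
            Y = 2       % d2;c2;t;
        ;
            Y = 3       % d2;c2;e;
        )
    ).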

Paths refer to the internal form of the procedure definition. When debugging is enabled (and the option `--trace-optimized' is not given), the compiler will try to keep this form as close as possible to the source form of the procedure, in order to make event paths as useful as possible to the programmer. Due to the compiler's flattening of terms, and its introduction of extra unifications to implement calls in implied modes, the number of conjuncts in a conjunction will frequently differ between the source and internal forms of a procedure. This is rarely a problem, however, as long as you know about it. Mode reordering can be a bit more of a problem, but it can be avoided by writing single-mode predicates and functions so that producers come before consumers. The compiler transformation that potentially causes the most trouble in the interpretation of goal paths is the conversion of disjunctions into switches. In most cases, a disjunction is transformed into a single switch, and it is usually easy to guess, just from the events within a switch arm, which disjunct the switch arm corresponds to. Some cases are more complex; for example, it is possible for a single disjunction to be transformed into several switches, possibly with other, smaller disjunctions inside them. In such cases, making sense of goal paths may require a look at the internal form of the procedure. You can ask the compiler to generate a file with the internal forms of the procedures in a given module by including the options `-dfinal -Dpaths' on the command line when compiling that module.

Preparing a program for debugging

When you compile a Mercury program, you can specify whether you want to be able to run the Mercury debugger on the program or not. If you do, the compiler embeds calls to the Mercury debugging system into the executable code of the program, at the execution points that represent trace events. At each event, the debugging system decides whether to give control back to the executable immediately, or whether to first give control to you, allowing you to examine the state of the computation and issue commands.

Mercury supports two broad ways of preparing a program for debugging. The simpler way is to compile a program in a debugging grade, which you can do directly by specifying a grade that includes the word "debug" (e.g. `asm_fast.gc.debug'), or indirectly by specifying the `--debug' grade option to the compiler, linker, and other tools (in particular mmc, mgnuc, ml, and c2init). If you follow this way, and accept the default settings of the various compiler options that control the selection of trace events (which are described below), you will be assured of being able to get control at every execution point that represents a potential trace event, which is very convenient.

The two drawbacks of using a debugging grade are the large size of the resulting executables, and the fact that often you discover that you need to debug a big program only after having built it in a non-debugging grade. This is why Mercury also supports another way to prepare a program for debugging, one that does not require the use of a debugging grade. With this way, you can decide, individually for each module, which of three trace levels, `none', `shallow' and `deep', you want to compile them with:

`none'
A procedure compiled with trace level `none' will never generate any events.
`deep'
A procedure compiled with trace level `deep' will always generate all the events requested by the user. By default, this is all possible events, but you can tell the compiler that you are not interested in some kinds of events via compiler options (see below).
`shallow'
A procedure compiled with trace level `shallow' will generate interface events if it is called from a procedure compiled with trace level `deep', but it will never generate any internal events, and it will not generate any interface events either if it is called from a procedure compiled with trace level `shallow'. If it is called from a procedure compiled with trace level `none', the way it will behave is dictated by whether its nearest ancestor whose trace level is not `none' has trace level `deep' or `shallow'.

The intended uses of these trace levels are as follows.

`deep'
You should compile a module with trace level `deep' if you suspect there may be a bug in the module, or if you think that being able to examine what happens inside that module can help you locate a bug.
`shallow'
You should compile a module with trace level `shallow' if you believe the code of the module is reliable and unlikely to have bugs, but you still want to be able to get control at calls to and returns from any predicates and functions defined in the module, and if you want to be able to see the arguments of those calls.
`none'
You should compile a module with trace level `none' only if you are reasonably confident that the module is reliable, and if you believe that knowing what calls other modules make to this module would not significantly benefit you in your debugging.

In general, it is a good idea for most or all modules that can be called from modules compiled with trace level `deep' to be compiled with at least trace level `shallow'.

You can control what trace level a module is compiled with by giving one of the following compiler options:

`--trace shallow'
This always sets the trace level to `shallow'.
`--trace deep'
This always sets the trace level to `deep'.
`--trace minimum'
In debugging grades, this sets the trace level to `shallow'; in non-debugging grades, it sets the trace level to `none'.
`--trace default'
In debugging grades, this sets the trace level to `deep'; in non-debugging grades, it sets the trace level to `none'.

As the name implies, the fourth alternative is the default, which is why by default you get no debugging capability in non-debugging grades and full debugging capability in debugging grades. The list above also shows that in a debugging grade, no module can be compiled with trace level `none'.

Important note: If you are not using a debugging grade, but you compile some modules with `--trace shallow' or `--trace deep', then you must also pass the `--trace' (or `-t') option to c2init and to the Mercury linker. If you're using Mmake, then you can do this by including `--trace' in the `C2INITFLAGS' and `MLFLAGS' variables.

If you're using Mmake, then you can also set the compilation options for a single module named Module by setting the Mmake variable `MCFLAGS-Module'. For example, to compile the file `foo.m' with deep tracing, `bar.m' with shallow tracing, and everything else with no tracing, you could use the following:

C2INITFLAGS = --trace
MLFLAGS     = --trace
MCFLAGS-foo = --trace deep
MCFLAGS-bar = --trace shallow

Selecting trace events

In preparing a Mercury program for debugging, the Mercury compiler provides two options that you can use to say that you are not interested in certain kinds of events, and that the compiler should therefore not generate code for those events. This makes the executable smaller and faster.

The first of these options is `--no-trace-internal'. If you specify this when you compile a module, you will not get any internal events from predicates and functions defined in that module, even if the trace level is `deep'; you will only get external events. These are sufficient to tell you what a predicate or function does, i.e. what outputs it computes from what inputs. They do not tell you how it computed those outputs, i.e. what path control took through the predicate or function, but that is sometimes of no particular interest. In any case, much of the time you can deduce the path from the events that result from calls made by the predicate or function in question.

The second of these options is `--no-trace-redo', which can be specified independently of `--no-trace-internal'. If you specify this when you compile a module, you will not get any redo events from predicates and functions defined in that module. If you are not interested in how backtracking arrives at a program point where forward execution can resume after a failure, this is an entirely reasonable thing to do. In any case, with sufficient thought and a memory of previous events you can reconstruct the sequence of redo events that would normally be present between the fail event and the event that represents the resumption of forward execution. This sequence contains, in reverse chronological order, a redo event for each call to a procedure that can succeed more than once, provided that the call occurred after the call to the procedure in which the resumption event occurs and has not yet failed.

Normally, when it compiles a module with a trace level other than `none', the compiler will include in the module's object file information about all the call return sites in that module. This information allows the debugger to print stack dumps, as well as the values of variables in ancestors of the current call. However, if you specify the `--no-trace-return' option, the compiler will not include this information in the object file, reducing its size but losing the above functionality.

By default, all trace levels other than `none' turn off all compiler optimizations that can affect the sequence of trace events generated by the program, such as inlining. If you are specifically interested in how the compiler's optimizations affect the trace event sequence, you can specify the option `--trace-optimized', which tells the compiler that it does not have to disable those optimizations. (A small number of low-level optimizations have not yet been enhanced to work properly in the presence of tracing, so the compiler disables these even if `--trace-optimized' is given.)
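
For example (a sketch; the module names are hypothetical), these options can be combined with the per-module trace levels described earlier in your `Mmakefile':

# Full tracing for the module suspected of containing the bug.
MCFLAGS-buggy_module  = --trace deep
# Interface events only, and no redo events, for a trusted helper module.
MCFLAGS-helper_module = --trace deep --no-trace-internal --no-trace-redo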

Mercury debugger invocation

The executables of Mercury programs by default do not invoke the Mercury debugger even if some or all of their modules were compiled with some form of tracing, and even if the grade of the executable is a debugging grade. This is similar to the behaviour of executables created by the implementations of other languages; for example, the executable of a C program compiled with `-g' does not automatically invoke gdb or dbx etc. when it is executed.

Unlike those other language implementations, when you invoke the Mercury debugger `mdb', you invoke it not just with the name of an executable but with the command line you want to debug. If something goes wrong when you execute the command

prog arg1 arg2 ...

and you want to find the cause of the problem, you must execute the command

mdb prog arg1 arg2 ...

because you do not get a chance to specify the command line of the program later.

When the debugger starts up, as part of its initialization it executes commands from the following three sources, in order:

  1. The file named by the `MERCURY_DEBUGGER_INIT' environment variable. Usually, `mdb' sets this variable to point to a file that provides documentation for all the debugger commands and defines a small set of aliases. However, if `MERCURY_DEBUGGER_INIT' is already defined when `mdb' is invoked, it will leave its value unchanged. You can use this override ability to provide alternate documentation. If the file named by `MERCURY_DEBUGGER_INIT' cannot be read, `mdb' will print a warning, since in that case the usual online documentation will not be available.
  2. The file named `.mdbrc' in your home directory. You can put your usual aliases and settings here.
  3. The file named `.mdbrc' in the current working directory. You can put program-specific aliases and settings here.
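
For example, a `.mdbrc' file might contain command lines such as the following. (This is a sketch only; it assumes that the `alias' command takes the alias name followed by the command it should expand to, as suggested by the alias listing shown in the quick overview.)

alias pa print *
alias fw finish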

Mercury debugger concepts

The operation of the Mercury debugger `mdb' is based on the following concepts.

break points
The user may associate a break point with some events that occur inside a procedure; the invocation condition of the break point says which events these are. The four possible invocation conditions are:

The effect of a break point depends on the state of the break point.

Neither of these will happen if the break point is disabled.

strict commands
When a debugger command steps over some events without user interaction at those events, the strictness of the command controls whether the debugger will stop execution and resume user interaction at events to which a break point with state `stop' applies. By default, the debugger will stop at such events. However, if the debugger is executing a strict command, it will not stop at an event just because a break point in the stop state applies to it.

print level
When a debugger command steps over some events without user interaction at those events, the print level controls under what circumstances the stepped over events will be printed.

Regardless of the print level, the debugger will print any event that causes execution to stop and user interaction to start.

default print level
The debugger maintains a default print level. The initial value of this variable is `some', but this value can be overridden by the user.

current environment
Whenever execution stops at an event, the current environment is reset to refer to the stack frame of the call specified by the event. However, the `up', `down' and `level' commands can set the current environment to refer to one of the ancestors of the current call. This will then be the current environment until another of these commands changes the environment yet again or execution continues to another event.

procedure specification
Some debugger commands, e.g. `break', require a parameter that specifies a procedure. Such a procedure specification has the following components in the following order:

Debugger commands

When the debugger (as opposed to the program being debugged) is interacting with the user, the debugger prints a prompt and reads in a line of text, which it will interpret as its next command. Each command line consists of several words separated by white space. The first word is the name of the command, while any other words give options and/or parameters to the command.

Some commands take a number as their first parameter. For such commands, users can type `number command' as well as `command number'. The debugger will treat the former as the latter, even if the number and the command are not separated by white space.

Interactive query commands

query module1 module2 ...
cc_query module1 module2 ...
io_query module1 module2 ...
These commands allow you to type in queries (goals) interactively in the debugger. When you use one of these commands, the debugger will respond with a query prompt (`?-' or `run <--'), at which you can type in a goal; the debugger will then compile and execute the goal and display the answer(s). You can return from the query prompt to the `mdb>' prompt by typing the end-of-file indicator (typically control-D or control-Z), or by typing `quit.'.

The module names module1, module2, ... specify which modules will be imported. Note that you can also add new modules to the list of imports directly at the query prompt, by using a command of the form `[module]', e.g. `[int]'. You need to import all the modules that define symbols used in your query. Queries can only use symbols that are exported from a module; entities that are declared only in a module's implementation section cannot be used.

The three variants differ in what kind of goals they allow. For goals which perform I/O, you need to use `io_query'; this lets you type in the goal using DCG syntax. For goals which don't do I/O, but which have determinism `cc_nondet' or `cc_multi', you need to use `cc_query'; this finds only one solution to the specified goal. For all other goals, you can use plain `query', which finds all the solutions to the goal.

For `query' and `cc_query', the debugger will print out all the variables in the goal using `io__write'. The goal must bind all of its variables to ground terms, otherwise you will get a mode error.

The current implementation works by compiling the queries on-the-fly and then dynamically linking them into the program being debugged. Thus it may take a little while for your query to be executed. Each query will be written to a file named `query.m' in the current directory, so make sure you don't name your source file `query.m'. Note that dynamic linking may not be supported on some systems; if you are using a system for which dynamic linking is not supported, you will get an error message when you try to run these commands.

You may also need to build your program using shared libraries for interactive queries to work. With Linux on the Intel x86 architecture, the default is for executables to be statically linked, which means that dynamic linking won't work, and hence interactive queries won't work either (the error message is rather obscure: the dynamic linker complains about the symbol `__data_start' being undefined). To build with shared libraries, you can use `MGNUCFLAGS=--pic-reg' and `MLFLAGS=--shared' in your Mmakefile. See the `README.Linux' file in the Mercury distribution for more details.
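As a minimal sketch of the settings just mentioned (an Mmakefile fragment for a hypothetical program), the relevant lines might look like this:

     # Compile with position-independent code and link shared, so that
     # interactively compiled queries can be dynamically linked in.
     MGNUCFLAGS = --pic-reg
     MLFLAGS    = --shared

After changing these variables, the program needs to be rebuilt before interactive queries will work.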

Forward movement commands

step [-NSans] [num]
Steps forward num events. If this command is given at event cur, continues execution until event cur + num. The default value of num is 1.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is not strict, and it uses the default print level.

A command line containing only a number num is interpreted as if it were `step num'.

An empty command line is interpreted as `step 1'.

goto [-NSans] num
Continues execution until the program reaches event number num. If the current event number is larger than num, it reports an error.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is strict, and it uses the default print level.

finish [-NSans] [num]
Continues execution until it reaches a final (EXIT or FAIL) port of the num'th ancestor of the call to which the current event refers. The default value of num is zero, which means skipping to the end of the current call. Reports an error if execution is already at the desired port.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is strict, and it uses the default print level.

return [-NSans]
Continues the program until the program finishes returning, i.e. until it reaches a port other than EXIT. Reports an error if the current event already refers to such a port.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is strict, and it uses the default print level.

forward [-NSans]
Continues the program until the program resumes forward execution, i.e. until it reaches a port other than REDO or FAIL. Reports an error if the current event already refers to such a port.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is strict, and it uses the default print level.

mindepth [-NSans] depth
Continues the program until the program reaches an event whose depth is at least depth. Reports an error if the current event already has such a depth.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is strict, and it uses the default print level.

maxdepth [-NSans] depth
Continues the program until the program reaches an event whose depth is at most depth. Reports an error if the current event already has such a depth.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is strict, and it uses the default print level.

continue [-NSans]
Continues execution until it reaches the end of the program.

The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.

By default, this command is not strict. The print level used by the command by default depends on the final strictness level: if the command is strict, it is `none', otherwise it is `some'.

Backward movement commands

retry
Restarts execution at the call port of the call corresponding to the current event.

The command will report an error unless the values of all the input arguments are available at the current port. (The compiler will keep the values of the input arguments of traced predicates as long as possible, but it cannot keep them beyond the point where they are destructively updated.)

The debugger can perform a retry only from an exit or fail port; only at these ports does the debugger have enough information to figure out how to reset the stacks. If the debugger is not at such a port when a retry command is given, the debugger will continue forward execution until it reaches an exit or fail port of the call to be retried before it performs the retry. This may require a noticeable amount of time.

retry num
Restarts execution at the call port of the call corresponding to the num'th ancestor of the call to which the current event belongs. For example, if num is 1, it restarts the parent of the current call.

Browsing commands

vars
Prints the names of all the known variables in the current environment, together with an ordinal number for each variable.

print name
print num
Prints the value of the variable in the current environment with the given name, or with the given ordinal number. This is a non-interactive version of the `browse' command (see below). Various settings which affect the way that terms are printed out (including e.g. the maximum term depth) can be set using the `set' command in the browser.

print *
Prints the values of all the known variables in the current environment.

browse name
browse num
Invokes an interactive term browser to browse the value of the variable in the current environment with the given ordinal number or with the given name.

The interactive term browser allows you to selectively examine particular subterms. The depth and size of printed terms may be controlled. The displayed terms may also be clipped to fit within a single screen.

For further documentation on the interactive term browser, invoke the `browse' command from within `mdb' and then type `help' at the `browser>' prompt.

stack [-d]
Prints the names of the ancestors of the call specified by the current event. If two or more ancestor calls are for the same procedure, the procedure identification will be printed once with the appropriate multiplicity annotation.

The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.

This command will report an error if there is no stack trace information available about any ancestor.

up [-d] [num]
Sets the current environment to the stack frame of the num'th level ancestor of the current environment (the immediate caller is the first-level ancestor).

If num is not specified, the default value is one.

This command will report an error if the current environment doesn't have the required number of ancestors, or if there is no execution trace information about the requested ancestor, or if there is no stack trace information about any of the ancestors between the current environment and the requested ancestor.

The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.

down [-d] [num]
Sets the current environment to the stack frame of the num'th level descendant of the current environment (the procedure called by the current environment is the first-level descendant).

If num is not specified, the default value is one.

This command will report an error if there is no execution trace information about the requested descendant.

The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.

level [-d] [num]
Sets the current environment to the stack frame of the num'th level ancestor of the call to which the current event belongs. The zero'th ancestor is the call of the event itself.

This command will report an error if the current environment doesn't have the required number of ancestors, or if there is no execution trace information about the requested ancestor, or if there is no stack trace information about any of the ancestors between the current environment and the requested ancestor.

The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.

current
Prints the current event. This is useful if the details of the event, which were printed when control arrived at the event, have since scrolled off the screen.

Breakpoint commands

break [-PS] filename:linenumber
Puts a break point on the specified line of the specified source file, if there is an event or a call at that position. If the filename is omitted, it defaults to the filename from the context of the current event.

The options `-P' or `--print', and `-S' or `--stop' specify the action to be taken at the break point.

By default, the initial state of the break point is `stop'.

break [-PSaei] proc-spec
Puts a break point on the specified procedure.

The options `-P' or `--print', and `-S' or `--stop' specify the action to be taken at the break point, while the options `-a' or `--all', `-e' or `--entry', and `-i' or `--interface' specify the invocation conditions of the break point.

By default, the action of the break point is `stop', and its invocation condition is `interface'.

break [-PS] here
Puts a break point on the procedure referred to by the current event, with the invocation condition being the event at the current location in the procedure body.

The options `-P' or `--print', and `-S' or `--stop' specify the action to be taken at the break point.

By default, the initial state of the break point is `stop'.
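For example (using a hypothetical source file name), the following session sets a stop break point on line 42 of `foo.m', and a print-only break point on the event at the current location in the current procedure:

     mdb> break foo.m:42
     mdb> break -P here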

break info
Lists the details and status of all break points.

disable num
Disables the break point with the given number. Reports an error if there is no break point with that number.

disable *
Disables all break points.

enable num
Enables the break point with the given number. Reports an error if there is no break point with that number.

enable *
Enables all break points.

delete num
Deletes the break point with the given number. Reports an error if there is no break point with that number.

delete *
Deletes all break points.

modules
Lists all the debuggable modules (i.e. modules that have debugging information).

procedures module
Lists all the procedures in the debuggable module module.

register
Registers all debuggable modules with the debugger. Has no effect if this registration has already been done. The debugger will perform this registration when creating breakpoints and when listing debuggable modules and/or procedures.

Parameter commands

mmc_options option1 option2 ...
This command sets the options that will be passed to `mmc' to compile your query when you use one of the query commands: `query', `cc_query', or `io_query'. For example, if a query results in a compile error, it may sometimes be helpful to use `mmc_options --verbose-error-messages'.

printlevel none
Sets the default print level to `none'.

printlevel some
Sets the default print level to `some'.

printlevel all
Sets the default print level to `all'.

printlevel
Reports the current default print level.

echo on
Turns on the echoing of commands.

echo off
Turns off the echoing of commands.

echo
Reports whether commands are being echoed or not.

scroll on
Turns on user control over the scrolling of sequences of event reports. This means that every screenful of event reports will be followed by a `--more--' prompt. You may type an empty line, which allows the debugger to continue to print the next screenful of event reports. By typing a line that starts with `a', `s' or `n', you can override the print level of the current command, setting it to `all', `some' or `none' respectively. By typing a line that starts with `q', you can abort the current debugger command and get back control at the next event.

scroll off
Turns off user control over the scrolling of sequences of event reports.

scroll size
Sets the scroll window size to size, which tells scroll control to stop and print a `--more--' prompt after every size - 1 events. The default value of size is the value of the `LINES' environment variable, which should correspond to the number of lines available on the terminal.

scroll
Reports whether user scroll control is enabled and what the window size is.

context none
When reporting events or ancestor levels, does not print contexts (filename/line number pairs).

context before
When reporting events or ancestor levels, prints contexts (filename/line number pairs) before the identification of the event or call to which they refer, on the same line. With long fully qualified predicate and function names, this may make the line wrap around.

context after
When reporting events or ancestor levels, prints contexts (filename/line number pairs) after the identification of the event or call to which they refer, on the same line. With long fully qualified predicate and function names, this may make the line wrap around.

context prevline
When reporting events or ancestor levels, prints contexts (filename/line number pairs) on a separate line before the identification of the event or call to which they refer.

context nextline
When reporting events or ancestor levels, prints contexts (filename/line number pairs) on a separate line after the identification of the event or call to which they refer.

context
Reports where contexts are being printed.

alias name command [command-parameter ...]
Introduces name as an alias for the given command with the given parameters. Whenever a command line has name as its first word, the debugger will substitute the given command and parameters for this word before executing the command line.

If command is the upper-case word `EMPTY', the debugger will substitute the given command and parameters whenever the user types in an empty command line.

If command is the upper-case word `NUMBER', the debugger will insert the given command and parameters before the command line whenever the user types in a command line that consists of a single number.
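For example, the following hypothetical session makes `p' an alias for `print', makes an empty command line perform a `step', and makes a bare number behave as a `goto':

     mdb> alias p print
     mdb> alias EMPTY step
     mdb> alias NUMBER goto

After the last of these, typing just `123' at the prompt is equivalent to typing `goto 123'.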

unalias name
Removes any existing alias for name.

Help commands

document_category slot category
Creates a new category of help items, named category. The summary text for the category is given by the lines following this command, up to but not including a line containing only the lower-case word `end'. The list of category summaries printed in response to the command `help' is ordered on the integer slot numbers of the categories involved.

document category slot item
Creates a new help item named item in the help category category. The text for the help item is given by the lines following this command, up to but not including a line containing only the lower-case word `end'. The list of items printed in response to the command `help category' is ordered on the integer slot numbers of the items involved.
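As a hypothetical illustration, the following defines a new help category and one item within it; the text of each entry is terminated by a line containing only `end':

     mdb> document_category 99 myproject
     Commands added for the myproject debugging session.
     end
     mdb> document myproject 1 mycmd
     mycmd: a project-specific alias; see `alias'.
     end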

help category item
Prints help text about the item item in category category.

help word
Prints help text about word, which may be the name of a help category or a help item.

help
Prints summary information about all the available help categories.

Experimental commands

histogram_all filename
Prints (to file filename) a histogram that counts all events at various depths since the start of the program. This histogram is available only in some experimental versions of the Mercury runtime system.

histogram_exp filename
Prints (to file filename) a histogram that counts all events at various depths since the start of the program or since the histogram was last cleared. This histogram is available only in some experimental versions of the Mercury runtime system.

clear_histogram
Clears the histogram printed by `histogram_exp', i.e. sets the counts for all depths to zero.

Developer commands

The following commands are intended for use by the developers of the Mercury implementation.

nondet_stack
Prints the contents of the fixed slots of the frames on the nondet stack.

stack_regs
Prints the contents of the virtual machine registers that point to the det and nondet stacks.

all_regs
Prints the contents of all the virtual machine registers.

Miscellaneous commands

source filename
Executes the commands in the file named filename.

quit [-y]
Quits the debugger and aborts the execution of the program. If the option `-y' is not present, asks for confirmation first. Any answer starting with `y', or end-of-file, is considered confirmation.

End-of-file on the debugger's input is considered a quit command.

Profiling

Introduction

The Mercury profiler `mprof' is a tool which can be used to analyze a Mercury program's performance, so that the programmer can determine which predicates or functions are taking up a disproportionate amount of the execution time.

To obtain the best trade-off between productivity and efficiency, programmers should not spend too much time optimizing their code until they know which parts of the code are really taking up most of the time. Only once the code has been profiled should the programmer consider making optimizations that would improve efficiency at the expense of readability or ease of maintenance.

A good profiler is a tool that should be part of every software engineer's toolkit.

Building profiled applications

To enable profiling, your program must be built with profiling enabled. This can be done by passing the `-p' (`--profiling') option to `mmc' (and also to `mgnuc' and `ml', if you invoke them separately). If you are using Mmake, then you can do this by setting the `GRADEFLAGS' variable in your Mmakefile, e.g. by adding the line `GRADEFLAGS=--profiling'. For more information about the different grades, see section Compilation model options.
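A minimal Mmakefile sketch for building a profiled executable (the program name `foo' is hypothetical):

     # Compile and link everything in a profiling grade.
     GRADEFLAGS = --profiling

The program is then rebuilt as usual with `mmake foo'; every module must be recompiled with this setting in effect.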

Enabling profiling has several effects. Firstly, it causes the compiler to generate slightly modified code which counts the number of times each predicate or function is called, and for every call, records the caller and callee. Secondly, your program will be linked with versions of the library and runtime that were compiled with profiling enabled. Thirdly, for each source file, the compiler will generate the static call graph for that file in `module.prof'.

Time profiling methods

You can control whether profiling measures real (elapsed) time, user time plus system time, or user time only, by including the options `-Tr', `-Tp', or `-Tv' respectively in the environment variable MERCURY_OPTIONS when you run the program to be profiled. Currently, the `-Tp' and `-Tv' options don't work on Windows, so on Windows you must explicitly specify `-Tr'.

The default is user time plus system time, which counts all time spent executing the process, including time spent by the operating system performing work on behalf of the process, but not including time that the process was suspended (e.g. due to time slicing, or while waiting for input). When measuring real time, profiling counts even periods during which the process was suspended. When measuring user time only, profiling does not count time inside the operating system at all.
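For example, to profile real (elapsed) time for a hypothetical executable `foo', one might run (using sh/bash syntax):

     # Measure real time instead of the default user+system time.
     MERCURY_OPTIONS=-Tr ./foo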

Creating the profile

The next step is to run your program. The profiling version of your program will collect profiling information during execution, and save this information in the files `Prof.Counts', `Prof.Decls', and `Prof.CallPair'. (`Prof.Decls' contains the names of the procedures and their associated addresses, `Prof.CallPair' records the number of times each procedure was called by each different caller, and `Prof.Counts' records the number of times that execution was in each procedure when a profiling interrupt occurred.)

It is also possible to combine profiling results from multiple runs of your program. You can do this by running your program several times, and typing `mprof_merge_counts' after each run.
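For example, following the description above, counts from two runs of a hypothetical program `foo' could be accumulated like this:

     ./foo                  # first run; writes the Prof.* files
     mprof_merge_counts     # merge this run's counts into the accumulated totals
     ./foo                  # second run
     mprof_merge_counts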

Due to a known timing-related bug in our code, you may occasionally get segmentation violations when running your program with time profiling enabled. If this happens, just run it again -- the problem occurs only very rarely.

Displaying the profile

To display the profile, just type `mprof'. This will read the `Prof.*' files and display the flat profile in a nice human-readable format. If you also want to see the call graph profile, which takes a lot longer to generate, type `mprof -c'.

Note that `mprof' can take quite a while to execute, and will usually produce quite a lot of output, so you will usually want to redirect the output into a file with a command such as `mprof > mprof.out'.

Analysis of results

The profile output consists of three major sections. These are named the call graph profile, the flat profile and the alphabetic listing.

The call graph profile presents the local call graph of each procedure. For each procedure it shows the parents (callers) and children (callees) of that procedure, and shows the execution time and call counts for each parent and child. It is sorted on the total amount of time spent in the procedure and all of its descendents (i.e. all of the procedures that it calls, directly or indirectly.)

The flat profile presents just the execution time spent in each procedure. It does not count the time spent in descendents of a procedure.

The alphabetic listing just lists the procedures in alphabetical order, along with their index number in the call graph profile, so that you can quickly find the entry for a particular procedure in the call graph profile.

The profiler works by interrupting the program at frequent intervals, and each time recording the currently active procedure and its caller. It uses these counts to determine the proportion of the total time spent in each procedure. This means that the figures calculated for these times are only a statistical approximation to the real values, and so they should be treated with some caution.

The time spent in a procedure and its descendents is calculated by propagating the times up the call graph, assuming that each call to a procedure from a particular caller takes the same amount of time. This assumption is usually reasonable, but again the results should be treated with caution.

Note that any time spent in a C function (e.g. time spent in `GC_malloc()', which does memory allocation and garbage collection) is credited to the Mercury procedure that called that C function.

Here is a small portion of the call graph profile from an example program.

                                  called/total       parents
index  %time    self descendents  called+self    name           index
                                  called/total       children

                                                     <spontaneous>
[1]    100.0    0.00        0.75       0         call_engine_label [1]
                0.00        0.75       1/1           do_interpreter [3]

-----------------------------------------------

                0.00        0.75       1/1           do_interpreter [3]
[2]    100.0    0.00        0.75       1         io__run/0(0) [2]
                0.00        0.00       1/1           io__init_state/2(0) [11]
                0.00        0.74       1/1           main/2(0) [4]

-----------------------------------------------

                0.00        0.75       1/1           call_engine_label [1]
[3]    100.0    0.00        0.75       1         do_interpreter [3]
                0.00        0.75       1/1           io__run/0(0) [2]

-----------------------------------------------

                0.00        0.74       1/1           io__run/0(0) [2]
[4]     99.9    0.00        0.74       1         main/2(0) [4]
                0.00        0.74       1/1           sort/2(0) [5]
                0.00        0.00       1/1           print_list/3(0) [16]
                0.00        0.00       1/10          io__write_string/3(0) [18]

-----------------------------------------------

                0.00        0.74       1/1           main/2(0) [4]
[5]     99.9    0.00        0.74       1         sort/2(0) [5]
                0.05        0.65       1/1           list__perm/2(0) [6]
                0.00        0.09   40320/40320       sorted/1(0) [10]

-----------------------------------------------

                                       8             list__perm/2(0) [6]
                0.05        0.65       1/1           sort/2(0) [5]
[6]     86.6    0.05        0.65       1+8      list__perm/2(0) [6]
                0.00        0.60    5914/5914        list__insert/3(2) [7]
                                       8             list__perm/2(0) [6]

-----------------------------------------------

                0.00        0.60    5914/5914        list__perm/2(0) [6]
[7]     80.0    0.00        0.60    5914         list__insert/3(2) [7]
                0.60        0.60    5914/5914        list__delete/3(3) [8]

-----------------------------------------------

                                   40319             list__delete/3(3) [8]
                0.60        0.60    5914/5914        list__insert/3(2) [7]
[8]     80.0    0.60        0.60    5914+40319  list__delete/3(3) [8]
                                   40319             list__delete/3(3) [8]

-----------------------------------------------

                0.00        0.00       3/69283       tree234__set/4(0) [15]
                0.09        0.09   69280/69283       sorted/1(0) [10]
[9]     13.3    0.10        0.10   69283         compare/3(0) [9]
                0.00        0.00       3/3           __Compare___io__stream/0(0) [20]
                0.00        0.00   69280/69280       builtin_compare_int/3(0) [27]

-----------------------------------------------

                0.00        0.09   40320/40320       sort/2(0) [5]
[10]    13.3    0.00        0.09   40320         sorted/1(0) [10]
                0.09        0.09   69280/69283       compare/3(0) [9]

-----------------------------------------------

The first entry is `call_engine_label' and its parent is `<spontaneous>', meaning that it is the root of the call graph. (The first three entries, `call_engine_label', `do_interpreter', and `io__run/0' are all part of the Mercury runtime; `main/2' is the entry point to the user's program.)

Each entry of the call graph profile consists of three sections: the parent procedures, the current procedure, and the child procedures.

Reading across from the left, for the current procedure the fields are: the index number of the procedure in the call graph, the percentage of total execution time spent in the procedure and its descendents, the time spent in the procedure itself (self), the time spent in its descendents, the number of times the procedure was called (plus, after the `+', the number of self-recursive calls), and the name of the procedure followed by its index number.

The predicate or function names are not just followed by their arity but also by their mode in brackets. A mode of zero corresponds to the first mode declaration of that predicate in the source code. For example, `list__delete/3(3)' corresponds to the `(out, out, in)' mode of `list__delete/3'.

For the parent and child procedures, the self and descendent times have slightly different meanings. For the parent procedures, the self and descendent time represent the proportion of the current procedure's self and descendent time due to that parent. Similarly, for the child procedures, the self and descendent time represent the proportion of that child's self and descendent time due to calls from the current procedure. These times are obtained using the assumption that each call contributes equally to the total time of the current procedure.

Memory profiling

It is also possible to profile memory allocation. To enable memory profiling, your program must be built with memory profiling enabled, using the `--memory-profiling' option. Then, as with time profiling, you run your program to create the profiling data. This will be stored in the files `Prof.MemoryWords', `Prof.MemoryCells', `Prof.Decls', and `Prof.CallPair'.

To create the profile, you need to invoke `mprof' with the `-m' (`--profile memory-words') option. This will profile the amount of memory allocated, measured in units of words. (A word is 4 bytes on a 32-bit architecture, and 8 bytes on a 64-bit architecture.)

Alternatively, you can use `mprof''s `-M' (`--profile memory-cells') option. This will profile memory in units of "cells". A cell is a group of words allocated together in a single allocation, to hold a single object. Selecting this option will therefore profile the number of memory allocations, while ignoring the size of each memory allocation.

With memory profiling, just as with time profiling, you can use the `-c' (`--call-graph') option to display call graph profiles in addition to flat profiles.
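For example, the following (a sketch using the output-redirection pattern mentioned earlier) produces a call graph profile of memory allocation measured in words:

     mprof -m -c > mprof.mem.out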

Note that Mercury's memory profiler will only tell you about allocation, not about deallocation (garbage collection). It can tell you how much memory was allocated by each procedure, but it won't tell you how long the memory was live for, or how much of that memory was garbage-collected.

Invocation

This section contains a brief description of all the options available for `mmc', the Mercury compiler. Sometimes this list is a little out-of-date; use `mmc --help' to get the most up-to-date list.

Invocation overview

mmc is invoked as

mmc [options] arguments

Arguments can be either module names or file names. Arguments ending in `.m' are assumed to be file names, while other arguments are assumed to be module names, with `.' (rather than `__' or `:') as module qualifier. If you specify a module name such as `foo.bar.baz', the compiler will look for the source in files `foo.bar.baz.m', `bar.baz.m', and `baz.m', in that order.

Options are either short (single-letter) options preceded by a single `-', or long options preceded by `--'. Options are case-sensitive. We call options that do not take arguments flags. Single-letter flags may be grouped with a single `-', e.g. `-vVc'. Single-letter flags may be negated by appending another trailing `-', e.g. `-v-'. Long flags may be negated by preceding them with `no-', e.g. `--no-verbose'.
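For example (using a hypothetical file name), the following invocations illustrate flag grouping and negation:

     mmc -vVc foo.m            # grouped single-letter flags: -v -V -c
     mmc -v- foo.m             # single-letter flag negated with a trailing `-'
     mmc --no-verbose foo.m    # long flag negated with `no-'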

Warning options

-w
--inhibit-warnings
Disable all warning messages.

--halt-at-warn
This option causes the compiler to treat all warnings as if they were errors. This means that if any warning is issued, the compiler will not generate code -- instead, it will return a non-zero exit status.

--halt-at-syntax-error
This option causes the compiler to halt immediately after syntax checking and not do any semantic checking if it finds any syntax errors in the program.

--no-warn-singleton-variables
Don't warn about variables which only occur once.

--no-warn-missing-det-decls
For predicates that are local to a module (those that are not exported), don't issue a warning if the `pred' or `mode' declaration does not have a determinism annotation. Use this option if you want the compiler to perform automatic determinism inference for non-exported predicates.

--no-warn-det-decls-too-lax
Don't warn about determinism declarations which could have been stricter.

--no-warn-nothing-exported
Don't warn about modules whose interface sections have no exported predicates, functions, insts, modes or types.

--warn-unused-args
Warn about predicate or function arguments which are not used.

--warn-interface-imports
Warn about modules imported in the interface which are not used in the interface.

--warn-missing-opt-files
Warn about `.opt' files that cannot be opened.

--warn-missing-trans-opt-files
Warn about `.trans_opt' files that cannot be opened.

--warn-non-stratification
Warn about possible non-stratification of the predicates/functions in the module. Non-stratification occurs when a predicate/function can call itself negatively through some path along its call graph.

--no-warn-simple-code
Disable warnings about constructs which are so simple that they are likely to be programming errors.

--warn-duplicate-calls
Warn about multiple calls to a predicate with the same input arguments.

--no-warn-missing-module-name
Disable warnings for modules that do not start with a `:- module' declaration.

--no-warn-wrong-module-name
Disable warnings for modules whose `:- module' declaration does not match the module's file name.

Verbosity options

-v
--verbose
Output progress messages at each stage in the compilation.

-V
--very-verbose
Output very verbose progress messages.

-E
--verbose-error-messages
Explain error messages. Asks the compiler to give you a more detailed explanation of any errors it finds in your program.

-S
--statistics
Output messages about the compiler's time/space usage. At the moment this option implies `--no-trad-passes', so you get information at the boundaries between phases of the compiler.

-T
--debug-types
Output detailed debugging traces of the type checking.

-N
--debug-modes
Output detailed debugging traces of the mode checking.

--debug-det, --debug-determinism
Output detailed debugging traces of determinism analysis.

--debug-opt
Output detailed debugging traces of the optimization process.

--debug-vn <n>
Output detailed debugging traces of the value numbering optimization pass. The different bits in the number argument of this option control the printing of different types of tracing messages.

--debug-pd
Output detailed debugging traces of the partial deduction and deforestation process.

Output options

These options are mutually exclusive. If more than one of these options is specified, only the first in this list will apply. If none of these options are specified, the default action is to compile and link the modules named on the command line to produce an executable.

-M
--generate-dependencies
Output "Make"-style dependencies for the module and all of its dependencies to `module.dep', `module.dv' and the relevant `.d' files.
--generate-module-order
Output the strongly connected components of the module dependency graph in top-down order to `module.order'. Implies `--generate-dependencies'.

-i
--make-int
--make-interface
Write the module interface to `module.int'. Also write the short interface to `module.int2'.

--make-short-int
--make-short-interface
Write the unqualified version of the short interface to `module.int3'.

--make-priv-int
--make-private-interface
Write the module's private interface (used for compiling nested sub-modules) to `module.int0'.

--make-opt-int
--make-optimization-interface
Write information used for inter-module optimization to `module.opt'.

--make-trans-opt
--make-transitive-optimization-interface
Write the `module.trans_opt' file. This file is used to store information used for inter-module optimization. The information is read in when the compiler is invoked with the `--transitive-intermodule-optimization' option. The file is called the "transitive" optimization interface file because a `.trans_opt' file may depend on other `.trans_opt' and `.opt' files. In contrast, a `.opt' file can only hold information derived directly from the corresponding `.m' file.

-G
--convert-to-goedel
Convert the Mercury code to Goedel. Output to file `module.loc'. The translation is not perfect; some Mercury constructs cannot be easily translated into Goedel.

-P
--pretty-print
--convert-to-mercury
Convert to Mercury. Output to file `module.ugly'. This option acts as a Mercury ugly-printer. (It would be a pretty-printer, except that comments are stripped and nested if-then-elses are indented too much -- so the result is rather ugly.)

--typecheck-only
Just check the syntax and type-correctness of the code. Don't invoke the mode analysis and later passes of the compiler. When converting Prolog code to Mercury, it can sometimes be useful to get the types right first and worry about modes second; this option supports that approach.

-e
--errorcheck-only
Check the module for errors, but do not generate any code.

-C
--compile-to-c
--compile-to-C
Generate C code in `module.c', but not object code.

-c
--compile-only
Generate C code in `module.c' and object code in `module.o' but do not attempt to link the named modules.

--output-grade-string
Compute from the rest of the option settings the canonical grade string and print it on the standard output.

Auxiliary output options

--no-assume-gmake
When generating `.d', `.dep' and `.dv' files, generate Makefile fragments that use only the features of standard make; do not assume the availability of GNU Make extensions. This can make these files significantly larger.
--trace-level level
Generate code that includes the specified level of execution tracing. The level should be one of `none', `shallow', `deep', and `default'. See section Debugging.
--no-trace-internal
Do not generate code for internal events even if the trace level is deep.
--no-trace-return
Do not generate trace information for call return sites. Prevents the printing of the values of variables in ancestors of the current call.
--no-trace-redo
Do not generate code for REDO events.
--trace-optimized
Do not disable optimizations that can change the trace.
--stack-trace-higher-order
Enable stack traces through predicates and functions with higher-order arguments, even if stack tracing is not supported in general.
--generate-bytecode
Output a bytecode form of the module for use by an experimental debugger.
--auto-comments
Output comments in the `module.c' file. This is primarily useful for trying to understand how the generated C code relates to the source code, e.g. in order to debug the compiler. The code may be easier to understand if you also use the `--no-llds-optimize' option.

--no-line-numbers
Do not put source line numbers in the generated code. The generated code may be in C (the usual case), in Goedel (with `--convert-to-goedel') or in Mercury (with `--convert-to-mercury').

--show-dependency-graph
Write out the dependency graph to module.dependency_graph.

-d stage
--dump-hlds stage
Dump the HLDS (intermediate representation) after the specified stage number or stage name to `module.hlds_dump.num-name'. Stage numbers range from 1 to 99; not all stage numbers are valid. The special stage name `all' causes the dumping of all stages. Multiple dump options accumulate.

--dump-hlds-options options
With `--dump-hlds', include extra detail in the dump. Each type of detail is included in the dump if its corresponding letter occurs in the option argument. These details are:
a - argument modes in unifications
b - builtin flags on calls
c - contexts of goals and types
d - determinism of goals
f - follow_vars sets of goals
g - goal feature lists
i - instmap deltas of goals
l - pred/mode ids and unify contexts of called predicates
m - mode information about clauses
n - nonlocal variables of goals
p - pre-birth, post-birth, pre-death and post-death sets of goals
r - resume points of goals
s - store maps of goals
t - results of termination analysis
u - unification categories
v - variable numbers in variable names
C - clause information
I - imported predicates
M - mode and inst information
P - path information
T - type and typeclass information
U - unify predicates
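As an illustration (the module name is hypothetical), the following dumps the HLDS after every stage, including goal contexts and determinism information in each dump:

     mmc --dump-hlds all --dump-hlds-options cd foo.m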

Language semantics options

See the Mercury language reference manual for detailed explanations of these options.

--no-reorder-conj
Execute conjunctions left-to-right except where the modes imply that reordering is unavoidable.

--no-reorder-disj
Execute disjunctions strictly left-to-right.

--fully-strict
Don't optimize away loops or calls to error/1.

--infer-types
If there is no type declaration for a predicate or function, try to infer the type, rather than just reporting an error.

--infer-modes
If there is no mode declaration for a predicate, try to infer the modes, rather than just reporting an error.

--no-infer-det, --no-infer-determinism
If there is no determinism declaration for a procedure, don't try to infer the determinism, just report an error.

--type-inference-iteration-limit n
Perform at most n passes of type inference (default: 60).

--mode-inference-iteration-limit n
Perform at most n passes of mode inference (default: 30).

Termination analysis options

For detailed explanations, see the "Termination analysis" section of the "Implementation-dependent extensions" chapter in the Mercury Language Reference Manual.

--enable-term
--enable-termination
Enable termination analysis. Termination analysis analyses each mode of each predicate to see whether it terminates. The `terminates', `does_not_terminate' and `check_termination' pragmas have no effect unless termination analysis is enabled. When using termination analysis, `--intermodule-optimization' should be enabled, as it greatly improves the accuracy of the analysis.

--chk-term
--check-term
--check-termination
Enable termination analysis, and emit warnings for some predicates or functions that cannot be proved to terminate. In many cases in which the compiler is unable to prove termination, the problem is either a lack of information about the termination properties of other predicates, or the fact that the program used language constructs (such as higher order calls) which cannot be analysed. In these cases the compiler does not emit a warning of non-termination, as it is likely to be spurious.

--verb-chk-term
--verb-check-term
--verbose-check-termination
Enable termination analysis, and emit warnings for all predicates or functions that cannot be proved to terminate.

--term-single-arg limit
--termination-single-argument-analysis limit
When performing termination analysis, try analyzing recursion on single arguments in strongly connected components of the call graph that have up to limit procedures. Setting this limit to zero disables single argument analysis.

--termination-norm norm
The norm defines how termination analysis measures the size of a memory cell. The `simple' norm says that size is always one. The `total' norm says that it is the number of words in the cell. The `num-data-elems' norm says that it is the number of words in the cell that contain something other than pointers to cells of the same type.

--term-err-limit limit
--termination-error-limit limit
Print at most limit reasons for any single termination error.

--term-path-limit limit
--termination-path-limit limit
Perform termination analysis only on predicates with at most limit paths.

Compilation model options

The following compilation options affect the generated code in such a way that the entire program must be compiled with the same setting of these options, and it must be linked to a version of the Mercury library which has been compiled with the same setting. (Attempting to link object files compiled with different settings of these options will generally result in an error at link time, typically of the form `undefined symbol MR_grade_...' or `symbol MR_runtime_grade multiply defined'.)

The options below must be passed to `mgnuc', `c2init' and `ml' as well as to `mmc'. If you are using Mmake, then you should specify these options in the `GRADEFLAGS' variable rather than specifying them in `MCFLAGS', `MGNUCFLAGS', `C2INITFLAGS' and `MLFLAGS'.
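For instance, a sketch of an Mmakefile fragment selecting a particular grade (assuming that this grade has been installed):

     # GNU C extensions (asm_fast), conservative GC (.gc), time profiling (.prof).
     GRADEFLAGS = --grade asm_fast.gc.prof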

-s grade
--grade grade
Select the compilation model. The grade should be a `.' separated list of the grade options to set. The grade options may be given in any order. The available options each belong to a set of mutually exclusive alternatives governing a single aspect of the compilation model. The set of aspects and their alternatives are:
What combination of GNU-C extensions to use:
`none', `reg', `jump', `asm_jump', `fast', and `asm_fast' (the default is system dependent).
What garbage collection strategy to use:
`gc', and `agc' (the default is no garbage collection).
What kind of profiling to use:
`prof', and `memprof' (the default is no profiling).
Whether to enable the trail:
`tr' (the default is no trailing).
What debugging features to enable:
`debug' (the default is no debugging features).
Whether to use a thread-safe version of the runtime environment:
`par' (the default is a non-thread-safe environment).
The default grade is system-dependent; it is chosen at installation time by `configure', the auto-configuration script, but can be overridden with the environment variable `MERCURY_DEFAULT_GRADE' if desired. Depending on your particular installation, only a subset of these possible grades will have been installed. Attempting to use a grade which has not been installed will result in an error at link time. (The error message will typically be something like `ld: can't find library for -lmercury'.) The tables below show the options that are selected by each base grade and grade modifier; they are followed by descriptions of those options.
Grade
Options implied.
`none'
--no-gcc-global-registers --no-gcc-non-local-gotos --no-asm-labels.
`reg'
--gcc-global-registers --no-gcc-non-local-gotos --no-asm-labels.
`jump'
--no-gcc-global-registers --gcc-non-local-gotos --no-asm-labels.
`fast'
--gcc-global-registers --gcc-non-local-gotos --no-asm-labels.
`asm_jump'
--no-gcc-global-registers --gcc-non-local-gotos --asm-labels.
`asm_fast'
--gcc-global-registers --gcc-non-local-gotos --asm-labels.
`.gc'
--gc conservative.
`.agc'
--gc accurate.
`.prof'
--profiling.
`.memprof'
--memory-profiling.
`.tr'
--use-trail.
`.debug'
--debug.

--gcc-global-registers (grades: reg, fast, asm_fast)
--no-gcc-global-registers (grades: none, jump, asm_jump)
Specify whether or not to use GNU C's global register variables extension.

--gcc-non-local-gotos (grades: jump, fast, asm_jump, asm_fast)
--no-gcc-non-local-gotos (grades: none, reg)
Specify whether or not to use GNU C's "labels as values" extension.

--asm-labels (grades: asm_jump, asm_fast)
--no-asm-labels (grades: none, reg, jump, fast)
Specify whether or not to use GNU C's asm extensions for inline assembler labels.

--gc {none, conservative, accurate}
--garbage-collection {none, conservative, accurate}
Specify which method of garbage collection to use. Grades containing `.gc' use `--gc conservative', other grades use `--gc none'. `accurate' is not yet implemented.

--profiling, --time-profiling (grades: any grade containing `.prof')
Enable time profiling. Insert profiling hooks in the generated code, and also output some profiling information (the static call graph) to the file `module.prof'. See section Profiling.

--memory-profiling (grades: any grade containing `.memprof')
Enable memory profiling. Insert memory profiling hooks in the generated code, and also output some profiling information (the static call graph) to the file `module.prof'. See section Memory profiling.

--debug (grades: any grade containing `.debug')
Enables the inclusion in the executable of code and data structures that allow the program to be debugged with `mdb' (see section Debugging).

--pic-reg (grades: any grade containing `.pic_reg')
[For Unix with Intel x86 architecture only.] Select a register usage convention that is compatible with position-independent code (gcc's `-fpic' option). This is necessary when using shared libraries on Intel x86 systems running Unix. On other systems it has no effect.

Developer compilation model options

Of the options listed below, the `--num-tag-bits' option may be useful for cross-compilation, but apart from that these options are all experimental and are intended for use by developers of the Mercury implementation rather than by ordinary Mercury programmers.

--tags {none, low, high}
(This option is not intended for general use.)
Specify whether to use the low bits or the high bits of each word as tag bits (default: low).

--num-tag-bits n
(This option is not intended for general use.)
Use n tag bits. This option is required if you specify `--tags high'. With `--tags low', the default number of tag bits to use is determined by the auto-configuration script.

--no-type-layout
(This option is not intended for general use.)
Don't output base_type_layout structures or references to them. This option will generate smaller executables, but will not allow the use of code that uses the layout information (e.g. `functor', `arg'). Using such code will result in undefined behaviour at runtime. The C code also needs to be compiled with `-DNO_TYPE_LAYOUT'.

Code generation options

--low-level-debug
Enables various low-level debugging facilities that were used in the distant past to debug the Mercury compiler's low-level code generation. This option is not likely to be useful to anyone except the Mercury implementors. It causes the generated code to become very big and very inefficient, and slows down compilation a lot.

--no-trad-passes
The default `--trad-passes' completely processes each predicate before going on to the next predicate. This option tells the compiler to complete each phase of code generation on all predicates before going on to the next phase on all predicates.

--no-reclaim-heap-on-nondet-failure
Don't reclaim heap on backtracking in nondet code.

--no-reclaim-heap-on-semidet-failure
Don't reclaim heap on backtracking in semidet code.

--no-reclaim-heap-on-failure
Combines the effect of the two options above.

--cc compiler-name
Specify which C compiler to use.

--c-include-directory dir
Specify the directory containing the Mercury C header files.

--cflags options
Specify options to be passed to the C compiler.

--c-debug
Pass the `-g' flag to the C compiler, to enable debugging of the generated C code, and also pass `--no-strip' to the Mercury linker, to tell it not to strip the C debugging information. Since the generated C code is very low-level, this option is not likely to be useful to anyone except the Mercury implementors, except perhaps for debugging code that uses Mercury's C interface extensively.

--fact-table-max-array-size size
Specify the maximum number of elements in a single `pragma fact_table' data array (default: 1024). The data for fact tables is placed into multiple C arrays, each with a maximum size given by this option. The reason for doing this is that most C compilers have trouble compiling very large arrays.

--fact-table-hash-percent-full percentage
Specify how full the `pragma fact_table' hash tables should be allowed to get. Given as an integer percentage (valid range: 1 to 100, default: 90). A lower value means that the compiler will use larger tables, but there will generally be fewer hash collisions, so it may result in faster lookups.

Code generation target options

The following options allow the Mercury compiler to optimize the generated C code based on the characteristics of the expected target architecture. The default values of these options will be whatever is appropriate for the host architecture that the Mercury compiler was installed on, so normally there is no need to set these options manually. They might come in handy if you are cross-compiling. But even when cross-compiling, it's probably not worth bothering to set these unless efficiency is absolutely paramount.

--have-delay-slot
(This option is not intended for general use.)
Assume that branch instructions have a delay slot.

--num-real-r-regs n
(This option is not intended for general use.)
Assume r1 up to rn are real general purpose registers.

--num-real-f-regs n
(This option is not intended for general use.)
Assume f1 up to fn are real floating point registers.

--num-real-r-temps n
(This option is not intended for general use.)
Assume that n non-float temporaries will fit into real machine registers.

--num-real-f-temps n
(This option is not intended for general use.)
Assume that n float temporaries will fit into real machine registers.

Optimization options

Overall optimization options

-O n
--opt-level n
--optimization-level n
Set optimization level to n. Optimization levels range from -1 to 6. Optimization level -1 disables all optimizations, while optimization level 6 enables all optimizations except for the cross-module optimizations listed below. In general, there is a trade-off between compilation speed and the speed of the generated code. When developing, you should normally use optimization level 0, which aims to minimize compilation time. It enables only those optimizations that in fact usually reduce compilation time. The default optimization level is level 2, which delivers reasonably good optimization in reasonable time. Optimization levels higher than that give better optimization, but take longer, and are subject to the law of diminishing returns. The difference in the quality of the generated code between optimization level 5 and optimization level 6 is very small, but using level 6 may increase compilation time and memory requirements dramatically. Note that if you want the compiler to perform cross-module optimizations, then you must enable them separately; the cross-module optimizations are not enabled by any `-O' level, because they affect the compilation process in ways that require special treatment by `mmake'.
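For example, to build with a higher optimization level via Mmake (a sketch; compile times will increase):

     # Pass -O5 to mmc for all modules of this program.
     MCFLAGS = -O5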

--opt-space
--optimize-space
Turn on optimizations that reduce code size and turn off optimizations that significantly increase code size.

--intermodule-optimization
Perform inlining and higher-order specialization of the code for predicates or functions imported from other modules.

--trans-intermod-opt
--transitive-intermodule-optimization
Use the information stored in `module.trans_opt' files to make intermodule optimizations. The `module.trans_opt' files are different to the `module.opt' files as `.trans_opt' files may depend on other `.trans_opt' files, whereas each `.opt' file may only depend on the corresponding `.m' file.
--use-opt-files
Perform inter-module optimization using any `.opt' files which are already built, e.g. those for the standard library, but do not build any others.
--use-trans-opt-files
Perform inter-module optimization using any `.trans_opt' files which are already built, e.g. those for the standard library, but do not build any others.

--split-c-files
Generate each C function in its own C file, so that the linker will optimize away unused code. This has the same effect as `--optimize-dead-procs', except that it works globally at link time, rather than over a single module, so it does a much better job of eliminating unused procedures. This option significantly increases compilation time, link time, and intermediate disk space requirements, but in return reduces the size of the final executable, typically by about 10-20%. This option is only useful with `--procs-per-c-function 1'. N.B. When using `mmake', the `--split-c-files' option should not be placed in the `MCFLAGS' variable. Instead, use the `MODULE.split' target, i.e. type `mmake foo.split' rather than `mmake foo'.

High-level (HLDS -> HLDS) optimization options

These optimizations are high-level transformations on our HLDS (high-level data structure).

--no-inlining
Disable all forms of inlining.
--no-inline-simple
Disable the inlining of simple procedures.
--no-inline-single-use
Disable the inlining of procedures called only once.
--inline-compound-threshold threshold
Inline a procedure if its size (measured roughly in terms of the number of connectives in its internal form), multiplied by the number of times it is called, is below the given threshold.
--inline-simple-threshold threshold
Inline a procedure if its size is less than the given threshold.
--intermod-inline-simple-threshold threshold
Similar to `--inline-simple-threshold', except that it is used to determine which predicates should be included in `.opt' files. Note that changing this between writing the `.opt' file and compiling to C may cause link errors, and too high a value may result in reduced performance.

--no-common-struct
Disable optimization of common term structures.

--no-common-goal
Disable optimization of common goals. At the moment this optimization detects only common deconstruction unifications. Disabling this optimization reduces the class of predicates that the compiler considers to be deterministic.

--no-follow-code
Don't migrate builtin goals into branched goals.

--optimize-unused-args
Remove unused predicate arguments. This allows the compiler to generate more efficient code for polymorphic predicates.

--intermod-unused-args
Perform unused argument removal across module boundaries. This option implies `--optimize-unused-args' and `--intermodule-optimization'.

--optimize-higher-order
Specialize calls to higher-order predicates where the higher-order arguments are known.

--type-specialization
Specialize calls to polymorphic predicates where the polymorphic types are known.

--user-guided-type-specialization
Enable specialization of polymorphic predicates for which there are `:- pragma type_spec' declarations. See the "Type specialization" section in the "Pragmas" chapter of the Mercury Language Reference Manual for more details.
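
As a rough sketch only (the predicate name `lookup/3' and the type variable `K' are hypothetical; the Reference Manual section cited above gives the precise syntax), such a declaration might look like:

    :- pragma type_spec(lookup/3, K = int).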

--higher-order-size-limit max_size
Set the maximum goal size of specialized versions created by `--optimize-higher-order' and `--type-specialization'. Goal size is measured as the number of calls, unifications and branched goals.

--optimize-constant-propagation
Evaluate constant expressions at compile time.

--introduce-accumulators
Attempt to introduce accumulating variables into procedures, so as to make them tail recursive.

--optimize-constructor-last-call
Enable the optimization of "last" calls that are followed by constructor application.

--optimize-dead-procs
Enable dead procedure elimination.

--excess-assign
Remove excess assignment unifications.

--optimize-duplicate-calls
Optimize away multiple calls to a predicate with the same input arguments.

--optimize-saved-vars
Reorder goals to minimize the number of variables that have to be saved across calls.

--deforestation
Enable deforestation. Deforestation is a program transformation whose aim is to avoid the construction of intermediate data structures and to avoid repeated traversals over data structures within a conjunction.

Medium-level (HLDS -> LLDS) optimization options

These optimizations are applied during the process of generating low-level intermediate code from our high-level data structure.

--no-static-ground-terms
Disable the optimization of constructing constant ground terms at compile time and storing them as static constants. Note that auxiliary data structures created by the compiler for purposes such as debugging will still be created as static constants.

--no-smart-indexing
Generate switches as simple if-then-else chains; disable string hashing and integer table-lookup indexing.

--dense-switch-req-density percentage
The jump table generated for an atomic switch must have at least this percentage of full slots (default: 25).

--dense-switch-size size
The jump table generated for an atomic switch must have at least this many entries (default: 4).

--lookup-switch-req-density percentage
The lookup tables generated for an atomic switch in which all the outputs are constant terms must have at least this percentage of full slots (default: 25).

--lookup-switch-size size
The lookup tables generated for an atomic switch in which all the outputs are constant terms must have at least this many entries (default: 4).

--string-switch-size size
The hash table generated for a string switch must have at least this many entries (default: 8).

--tag-switch-size size
The number of alternatives in a tag switch must be at least this number (default: 3).

--try-switch-size size
The number of alternatives in a try-chain switch must be at least this number (default: 3).

--binary-switch-size size
The number of alternatives in a binary search switch must be at least this number (default: 4).

--no-middle-rec
Disable the middle recursion optimization.

--no-simple-neg
Don't generate simplified code for simple negations.

--no-follow-vars
Don't optimize the assignment of registers in branched goals.

Low-level (LLDS -> LLDS) optimization options

These optimizations are transformations that are applied to our low-level intermediate code before emitting C code.

--no-common-data
Disable optimization of common data structures.
--no-llds-optimize
Disable the low-level optimization passes.

--no-optimize-peep
Disable local peephole optimizations.

--no-optimize-jumps
Disable elimination of jumps to jumps.

--no-optimize-fulljumps
Disable elimination of jumps to ordinary code.

--checked-nondet-tailcalls
Convert nondet calls into tail calls whenever possible, even when this requires a runtime check. This option tries to minimize stack consumption, possibly at the expense of speed.

--no-optimize-labels
Disable elimination of dead labels and code.

--optimize-dups
Enable elimination of duplicate code.

--optimize-value-number
Perform value numbering on extended basic blocks.

--pred-value-number
Extend value numbering to whole procedures, rather than just basic blocks.

--no-optimize-frames
Disable stack frame optimizations.

--no-optimize-delay-slot
Disable branch delay slot optimizations.

--optimize-repeat n
Iterate most optimizations at most n times (default: 3).

--optimize-vnrepeat n
Iterate value numbering at most n times (default: 1).

Output-level (LLDS -> C) optimization options

These optimizations are applied during the process of generating C intermediate code from our low-level data structure.

--no-emit-c-loops
Use only gotos -- don't emit C loop constructs.

--use-macro-for-redo-fail
Emit the fail or redo macro instead of a branch to the fail or redo code in the runtime system.

--procs-per-c-function n
Don't put the code for more than n Mercury procedures in a single C function. The default value of n is one. Increasing n can produce slightly more efficient code, but makes compilation slower. Setting n to the special value zero has the effect of putting all the procedures in a single function, which produces the most efficient code but tends to severely stress the C compiler.

Object-level (C -> object code) optimization options

These optimizations are applied during the process of compiling the generated C code to machine code object files.

If you are using Mmake, you need to pass these options to `mgnuc' rather than to `mmc'.

--no-c-optimize
Don't enable the C compiler's optimizations.

--inline-alloc
Inline calls to `GC_malloc()'. This can improve performance a fair bit, but may significantly increase code size. This option has no effect if `--gc conservative' is not set or if the C compiler is not GNU C.
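
For example, assuming your Mmakefile uses the `MGNUCFLAGS' variable to pass extra options to `mgnuc' (this variable name is an assumption about your mmake setup), you might write:

    MGNUCFLAGS = --inline-alloc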

Miscellaneous options

-I dir
--search-directory dir
Append dir to the list of directories to be searched for imported modules.
--intermod-directory dir
Append dir to the list of directories to be searched for `.opt' files.
--use-search-directories-for-intermod
Append the arguments of all -I options to the list of directories to be searched for `.opt' files.
--use-subdirs
Create intermediate files in a `Mercury' subdirectory, rather than in the current directory.

-?
-h
--help
Print a usage message.

--filenames-from-stdin
Read and compile a newline-terminated module name or file name from the standard input, repeating until EOF is reached. (This allows a program or user to compile several modules interactively without the overhead of process creation for each one.)
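
For example, a script could compile several (hypothetical) modules with a single compiler process like this:

    printf 'foo\nbar\nbaz\n' | mmc --filenames-from-stdin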

Link options

-o filename
--output-file filename
Specify the name of the final executable. (The default executable name is the same as the name of the first module on the command line, but without the `.m' extension.)

--link-flags options
Specify options to be passed to `ml', the Mercury linker.

-L directory
--library-directory directory
Append directory to the list of directories in which to search for libraries.
-l library
--library library
Link with the specified library.
--link-object object
Link with the specified object file.
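
For example, to build a (hypothetical) program from `main.m', name the executable `myprog', and link against a library `libfoo.a' installed in `/usr/local/lib/foo', you might invoke:

    mmc -o myprog -L /usr/local/lib/foo -l foo main.m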

Environment variables

The shell scripts in the Mercury compilation environment will use the following environment variables if they are set. There should be little need to use these, because the default values will generally work fine.

MERCURY_DEFAULT_GRADE
The default grade to use if no `--grade' option is specified.

MERCURY_C_INCL_DIR
Directory for the C header files for the Mercury runtime system (`*.h'). This environment variable is used only to define the default value of MERCURY_ALL_C_INCL_DIRS, so if you define that environment variable separately, the value of MERCURY_C_INCL_DIR is ignored.

MERCURY_ALL_C_INCL_DIRS
A list of options for the C compiler that specifies all the directories the C compiler should search for the C header files of the Mercury runtime system and garbage collector. The default value of this variable is `-I$MERCURY_C_INCL_DIR', since usually all these header files are installed in one directory.

MERCURY_INT_DIR
Directory for the Mercury library interface files (`*.int', `*.int2', `*.int3' and `*.opt').

MERCURY_NC_BUILTIN
Filename of the Mercury `nc'-compatibility file (nc_builtin.nl).

MERCURY_C_LIB_DIR
Base directory containing the Mercury libraries (`libmer.a' and possibly `libmer.so') for each configuration and grade. The libraries for each configuration and grade should be in the subdirectory config/grade of $MERCURY_C_LIB_DIR.

MERCURY_NONSHARED_LIB_DIR
For IRIX 5, this environment variable can be used to specify a directory containing a version of libgcc.a which has been compiled with `-mno-abicalls'. See the file `README.IRIX-5' in the Mercury source distribution.

MERCURY_MOD_LIB_DIR
The directory containing the .init files in the Mercury library. They are used to create the initialization file `*_init.c'.

MERCURY_MOD_LIB_MODS
The names of the .init files in the Mercury library.

MERCURY_COMPILER
Filename of the Mercury Compiler.

MERCURY_INTERPRETER
Filename of the Mercury Interpreter.

MERCURY_MKINIT
Filename of the program to create the `*_init.c' file.

MERCURY_DEBUGGER_INIT
Name of a file that contains startup commands for the Mercury debugger. This file should contain documentation for the debugger command set, and possibly a set of default aliases.

MERCURY_OPTIONS
A list of options for the Mercury runtime that gets linked into every Mercury program. Their meanings are as follows; an example of setting this variable is given after the list of options.

-C size
Tells the runtime system to optimize the locations of the starts of the various data areas for a primary data cache of size kilobytes. The optimization consists of arranging the starts of the areas to differ as much as possible modulo this size.

-D debugger
Enables execution tracing of the program, via the internal debugger if debugger is `i' and via the external debugger if debugger is `e'. (The mdb script works by including `-Di' in MERCURY_OPTIONS.) The external debugger is not yet available.

-p
Disables profiling. This only has an effect if the executable was built in a profiling grade.

-P num
Tells the runtime system to use num threads if the program was built in a parallel grade.

-T time-method
If the executable was compiled in a grade that includes time profiling, this option specifies what time is counted in the profile. time-method must have one of the following values:

`r'
Profile real (elapsed) time (using ITIMER_REAL).
`p'
Profile user time plus system time (using ITIMER_PROF). This is the default.
`v'
Profile user time (using ITIMER_VIRTUAL).

Currently, the `-Tp' and `-Tv' options don't work on Windows, so on Windows you must explicitly specify `-Tr'.

--heap-size size
Sets the size of the heap to size kilobytes.

--detstack-size size
Sets the size of the det stack to size kilobytes.

--nondetstack-size size
Sets the size of the nondet stack to size kilobytes.

--trail-size size
Sets the size of the trail to size kilobytes.

-i filename
--mdb-in filename
Read debugger input from the file or device specified by filename, rather than from standard input.

-o filename
--mdb-out filename
Print debugger output to the file or device specified by filename, rather than to standard output.

-e filename
--mdb-err filename
Print debugger error messages to the file or device specified by filename, rather than to standard error.

-m filename
--mdb-tty filename
Redirect all three debugger I/O streams -- input, output, and error messages -- to the file or device specified by filename.
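
As an example of setting MERCURY_OPTIONS, the following Bourne-shell command runs a hypothetical executable `foo', built in a time-profiling grade, with real-time profiling and a larger det stack (the stack size shown is arbitrary):

    MERCURY_OPTIONS="-Tr --detstack-size 8192" ./foo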

Using a different C compiler

The Mercury compiler takes special advantage of certain extensions provided by GNU C to generate much more efficient code. We therefore recommend that you use GNU C for compiling Mercury programs. However, if for some reason you wish to use another compiler, it is possible to do so. Here's what you need to do.