Copyright (C) 1995-2001 The University of Melbourne.
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.
This document describes the compilation environment of Mercury. It describes how to use `mmc', the Mercury compiler; how to use `mmake', the "Mercury make" program, a tool built on top of ordinary or GNU make to simplify the handling of Mercury programs; how to use `mdb', the Mercury debugger; and how to use `mprof', the Mercury profiler.
We strongly recommend that programmers use `mmake' rather than invoking `mmc' directly, because `mmake' is generally easier to use and avoids unnecessary recompilation.
Mercury source files must be named `*.m'. Each Mercury source file should contain a single Mercury module whose module name should be the same as the filename without the `.m' extension.
The Mercury implementation uses a variety of intermediate files, which are described below. But all you really need to know is how to name source files. For historical reasons, the default behaviour is for intermediate files to be created in the current directory, but if you use the `--use-subdirs' option to `mmc' or `mmake', all these intermediate files will be created in a `Mercury' subdirectory, where you can happily ignore them. Thus you may wish to skip the rest of this chapter.
In cases where the source file name and module name don't match, the names for intermediate files are based on the name of the module from which they are derived, not on the source file name.
Files ending in `.int', `.int0', `.int2' and `.int3' are interface files; these are generated automatically by the compiler, using the `--make-interface' (or `--make-int'), `--make-private-interface' (or `--make-priv-int'), `--make-short-interface' (or `--make-short-int') options. Files ending in `.opt' are interface files used in inter-module optimization, and are created using the `--make-optimization-interface' (or `--make-opt-int') option. Similarly, files ending in `.trans_opt' are interface files used in transitive inter-module optimization, and are created using the `--make-transitive-optimization-interface' (or `--make-trans-opt-int') option.
Since the interface of a module changes less often than its implementation, the `.int', `.int0', `.int2', `.int3', `.opt', and `.trans_opt' files will remain unchanged on many compilations. To avoid unnecessary recompilations of the clients of the module, the timestamps on these files are updated only if their contents change. The `.date', `.date0', `.date3', `.optdate', and `.trans_opt_date' files associated with the module serve as date stamps; they are used when deciding whether the interface files need to be regenerated.
Files ending in `.d' are automatically-generated Makefile fragments which contain the dependencies for a module. Files ending in `.dep' are automatically-generated Makefile fragments which contain the rules for an entire program. Files ending in `.dv' are automatically-generated Makefile fragments which contain variable definitions for an entire program.
As usual, `.c' files are C source code, `.h' files are C header files, and `.o' files are object code. In addition, `.pic_o' files are object code files that contain position-independent code (PIC).
Files ending in `.rlo' are Aditi-RL bytecode files, which are executed by the Aditi deductive database system (see section Using Aditi).
Following a long Unix tradition, the Mercury compiler is called `mmc' (for "Melbourne Mercury Compiler"). Some of its options (e.g. `-c', `-o', and `-I') have a similar meaning to that in other Unix compilers.
Arguments to `mmc' may be either file names (ending in `.m'), or module names, with `.' (rather than `__' or `:') as the module qualifier. For a module name such as `foo.bar.baz', the compiler will look for the source in files `foo.bar.baz.m', `bar.baz.m', and `baz.m', in that order. Note that if the file name does not include all the module qualifiers (e.g. if it is `bar.baz.m' or `baz.m' rather than `foo.bar.baz.m'), then the module name in the `:- module' declaration for that module must be fully qualified.
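For example, assuming the source for the module `foo.bar.baz' really does live in the file `foo.bar.baz.m', the following two commands should be equivalent:

mmc -c foo.bar.baz.m
mmc -c foo.bar.baz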
To compile a program which consists of just a single source file, use the command
mmc filename.m
Unlike traditional Unix compilers, however, `mmc' will put the executable into a file called `filename', not `a.out'.
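For instance, a minimal session with a single-file program (here the hypothetical `hello.m') might look like this:

mmc hello.m
./hello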
For programs that consist of more than one source file, we strongly recommend that you use Mmake (see section Using Mmake). Mmake will perform all the steps listed below, using automatic dependency analysis to ensure that things are done in the right order, and that steps are not repeated unnecessarily. If you use Mmake, then you don't need to understand the details of how the Mercury implementation goes about building programs. Thus you may wish to skip the rest of this chapter.
To compile a source file to object code without creating an executable, use the command
mmc -c filename.m
`mmc' will put the object code into a file called `module.o', where module is the name of the Mercury module defined in `filename.m'. It will also leave the intermediate C code in a file called `module.c'. If the source file contains nested modules, then each sub-module will get compiled to separate C and object files.
Before you can compile a module, you must make the interface files for the modules that it imports (directly or indirectly). You can create the interface files for one or more source files using the following commands:
mmc --make-short-int filename1.m filename2.m ...
mmc --make-priv-int filename1.m filename2.m ...
mmc --make-int filename1.m filename2.m ...
If you are going to compile with `--intermodule-optimization' enabled, then you also need to create the optimization interface files.
mmc --make-opt-int filename1.m filename2.m ...
If you are going to compile with `--transitive-intermodule-optimization' enabled, then you also need to create the transitive optimization files.
mmc --make-trans-opt filename1.m filename2.m ...
Given that you have made all the interface files, one way to create an executable for a multi-module program is to compile all the modules at the same time using the command
mmc filename1.m filename2.m ...
This will by default put the resulting executable in `filename1', but you can use the `-o filename' option to specify a different name for the output file, if you so desire.
The other way to create an executable for a multi-module program is to compile each module separately using `mmc -c', and then link the resulting object files together. The linking is a two stage process.
First, you must create and compile an initialization file, which is a C source file containing calls to automatically generated initialization functions contained in the C code of the modules of the program:
c2init module1.c module2.c ... > main-module_init.c
mgnuc -c main-module_init.c
The `c2init' command line must contain the name of the C file of every module in the program. The order of the arguments is not important. The `mgnuc' command is the Mercury GNU C compiler; it is a shell script that invokes the GNU C compiler `gcc' with the options appropriate for compiling the C programs generated by Mercury.
You then link the object code of each module with the object code of the initialization file to yield the executable:
ml -o main-module module1.o module2.o ... main-module_init.o
`ml', the Mercury linker, is another shell script that invokes a C compiler with options appropriate for Mercury, this time for linking. `ml' also pipes any error messages from the linker through `mdemangle', the Mercury symbol demangler, so that error messages refer to predicate and function names from the Mercury source code rather than to the names used in the intermediate C code.
The above command puts the executable in the file `main-module'. The same command line without the `-o' option would put the executable into the file `a.out'.
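Putting the pieces together, a by-hand build of a hypothetical program `main' consisting of the two modules `main.m' and `utils.m' might look roughly like this (Mmake performs the equivalent steps for you):

mmc --make-short-int main.m utils.m
mmc --make-int main.m utils.m
mmc -c main.m
mmc -c utils.m
c2init main.c utils.c > main_init.c
mgnuc -c main_init.c
ml -o main main.o utils.o main_init.o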
`mmc' and `ml' both accept a `-v' (verbose) option. You can use that option to see what is actually going on. For the full set of options of `mmc', see section Invocation.
Once you have created an executable for a Mercury program, you can go ahead and execute it. You may however wish to specify certain options to the Mercury runtime system. The Mercury runtime accepts options via the `MERCURY_OPTIONS' environment variable. The most useful of these are the options that set the size of the stacks. (For the full list of available options, see section Environment variables.)
The det stack and the nondet stack are allocated fixed sizes at program start-up. The default size is 4096k for the det stack and 128k for the nondet stack, but these can be overridden with the `--detstack-size' and `--nondetstack-size' options, whose arguments are the desired sizes of the det and nondet stacks respectively, in units of kilobytes. On operating systems that provide the appropriate support, the Mercury runtime will ensure that stack overflow is trapped by the virtual memory system.
With conservative garbage collection (the default), the heap will start out with a zero size, and will be dynamically expanded as needed. When not using conservative garbage collection, the heap has a fixed size like the stacks. The default size is 4 Mb, but this can be overridden with the `--heap-size' option.
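For example, to run a program (here the placeholder `./my_program') with an 8192k det stack and a 256k nondet stack, you might use something like:

MERCURY_OPTIONS="--detstack-size 8192 --nondetstack-size 256" ./my_program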
Mmake, short for "Mercury Make", is a tool for building Mercury programs that is built on top of ordinary or GNU Make (1). With Mmake, building even a complicated Mercury program consisting of a number of modules is as simple as
mmake main-module.depend
mmake main-module
Mmake only recompiles those files that need to be recompiled, based on automatically generated dependency information. Most of the dependencies are stored in `.d' files that are automatically recomputed every time you recompile, so they are never out-of-date. A little bit of the dependency information is stored in `.dep' and `.dv' files which are more expensive to recompute. The `mmake main-module.depend' command which recreates the `main-module.dep' and `main-module.dv' files needs to be repeated only when you add or remove a module from your program, and there is no danger of getting an inconsistent executable if you forget this step -- instead you will get a compile or link error.
`mmake' allows you to build more than one program in the same directory. Each program must have its own `.dep' and `.dv' files, and therefore you must run `mmake program.depend' for each program.
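For example, if one directory contains two programs whose top-level modules are `prog1.m' and `prog2.m' (hypothetical names), you would generate the dependency files for each program separately and then build them:

mmake prog1.depend
mmake prog2.depend
mmake prog1
mmake prog2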
If there is a file called `Mmake' or `Mmakefile' in the current directory, Mmake will include that file in its automatically-generated Makefile. The `Mmake' file can override the default values of various variables used by Mmake's builtin rules, or it can add additional rules, dependencies, and actions.
Mmake's builtin rules are defined by the file `prefix/lib/mercury/mmake/Mmake.rules' (where prefix is `/usr/local/mercury-version' by default, and version is the version number, e.g. `0.6'), as well as the rules and variables in the automatically-generated `.dep' and `.dv' files. These rules define targets for building executables and libraries, for regenerating dependency information, and for installing libraries. The library installation target installs a library in each of the grades specified by the LIBGRADES variable; it will also build and install the necessary interface files. The variable INSTALL specifies the name of the command to use to install each file, by default `cp'. The variable INSTALL_MKDIR specifies the command to use to create directories, by default `mkdir -p'. For more information, see section Supporting multiple grades and architectures.
The variables used by the builtin rules (and their default values) are defined in the file `prefix/lib/mercury/mmake/Mmake.vars'; however, these may be overridden by user `Mmake' files. Some of the more useful variables are:
MAIN_TARGET
    The name of the default target to build when `mmake' is invoked without any target arguments.
MC
    The command used to invoke the Mercury compiler; the default is `mmc'.
GRADEFLAGS and EXTRA_GRADEFLAGS
    Compilation model (grade) options. These are passed to the relevant tools (in particular mmc, mgnuc, ml, and c2init).
MCFLAGS and EXTRA_MCFLAGS
    Options to pass to the Mercury compiler. (Note that compilation model options should be specified in GRADEFLAGS, not in MCFLAGS.)
MGNUC
    The command used to invoke the C compiler; the default is the `mgnuc' script.
MGNUCFLAGS and EXTRA_MGNUCFLAGS
    Options to pass to the `mgnuc' script.
MS_CLFLAGS and EXTRA_MS_CLFLAGS
    Options to pass to the Microsoft C compiler, when that compiler is being used.
MS_CL_NOASM
    A further setting for the Microsoft C compiler; see `Mmake.vars' for its meaning.
ML
    The command used to invoke the Mercury linker; the default is the `ml' script.
MLFLAGS and EXTRA_MLFLAGS
    Options to pass to the `ml' script. (Note that compilation model options should be specified in GRADEFLAGS, not in MLFLAGS.)
MLLIBS and EXTRA_MLLIBS
    A list of `-l' options naming extra libraries to link with.
MLOBJS
    A list of extra object files to link into any programs or libraries that are built.
C2INITFLAGS and EXTRA_C2INITFLAGS
    Options to pass to `c2init'. (Note that compilation model options and extra `.init' files should not be specified in C2INITFLAGS - they should be specified in GRADEFLAGS and C2INITARGS, respectively.)
C2INITARGS and EXTRA_C2INITARGS
    Extra arguments to pass to `c2init', such as the `.init' files of any libraries used. Compilation model options should not be specified here (that is what C2INITFLAGS is for), since these arguments are also used to derive extra dependency information.
EXTRA_LIBRARIES
    A list of extra Mercury libraries to use.
EXTRA_LIB_DIRS
    A list of extra Mercury library directory hierarchies to search.
INSTALL_PREFIX
    The path to the root of the directory hierarchy where libraries should be installed.
INSTALL
    The command used to install each file, by default `cp'.
INSTALL_MKDIR
    The command used to create installation directories, by default `mkdir -p'.
LIBGRADES
    The list of grades in which libraries should be built and installed. (Note that the GRADEFLAGS settings will also be applied when the library is built in each of the listed grades, so you may not get what you expect if those options are not subsumed by each of the grades listed.)
Other variables also exist - see `prefix/lib/mercury/mmake/Mmake.vars' for a complete list.
If you wish to temporarily change the flags passed to an executable, rather than setting the various `FLAGS' variables directly, you can set an `EXTRA_' variable. This is particularly intended for use where a shell script needs to call mmake and add an extra parameter, without interfering with the flag settings in the `Mmakefile'.
For each of the variables for which there is a version with an `EXTRA_' prefix, there is also a version with an `ALL_' prefix that is defined to include both the ordinary and the `EXTRA_' version. If you wish to use the values of any of these variables in your Mmakefile (as opposed to setting the values), then you should use the `ALL_' version.
It is also possible to override these variables on a per-file basis. For example, if you have a module called, say, `bad_style.m' which triggers lots of compiler warnings, and you want to disable the warnings just for that file but keep them for all the other modules, then you can override MCFLAGS just for that file. This is done by setting the variable `MCFLAGS-bad_style', as shown here:
MCFLAGS-bad_style = --inhibit-warnings
Mmake has a few options, including `--use-subdirs', `--save-makefile', `--verbose', and `--no-warn-undefined-vars'. For details about these options, see the man page or type `mmake --help'.
Finally, since Mmake is built on top of Make or GNU Make, you can also make use of the features and options supported by the underlying Make. In particular, GNU Make has support for running jobs in parallel, which is very useful if you have a machine with more than one CPU.
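For example, if the underlying Make is GNU Make and your mmake passes unrecognised options through to it, something like the following should run up to four jobs in parallel (`main-module' is a placeholder):

mmake -j4 main-module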
Often you will want to use a particular set of Mercury modules in more than one program. The Mercury implementation includes support for developing libraries, i.e. sets of Mercury modules intended for reuse. It allows separate compilation of libraries and, on many platforms, it supports shared object libraries.
A Mercury library is identified by a top-level module, which should contain all of the modules in that library as sub-modules. It may be as simple as this `mypackage.m' file:
:- module mypackage.
:- interface.
:- include_module foo, bar, baz.
This defines a module `mypackage' containing sub-modules `mypackage:foo', `mypackage:bar', and `mypackage:baz'.
It is also possible to build libraries of unrelated modules, so long as the top-level module imports all the necessary modules. For example:
:- module blah.
:- import_module fee, fie, foe, fum.
This example defines a module `blah', which has no functionality of its own, and which is just used for grouping the unrelated modules `fee', `fie', `foe', and `fum'.
Generally it is better style for each library to consist of a single module which encapsulates its sub-modules, as in the first example, rather than just a group of unrelated modules, as in the second example.
Generally Mmake will do most of the work of building libraries automatically. Here's a sample Mmakefile for creating a library.
MAIN_TARGET = libmypackage

depend: mypackage.depend
The Mmake target `libfoo' is a built-in target for creating a library whose top-level module is `foo.m'. The automatically generated Mmake rules for the target `libfoo' will create all the files needed to use the library. (You will need to run `mmake foo.depend' first to generate the module dependency information.)
Mmake will create static (non-shared) object libraries and, on most platforms, shared object libraries; however, we do not yet support the creation of dynamic link libraries (DLLs) on Windows. Static libraries are created using the standard tools `ar' and `ranlib'. Shared libraries are created using the `--make-shared-lib' option to `ml'. The automatically-generated Make rules for `libmypackage' will look something like this:
libmypackage: libmypackage.a libmypackage.so \
		$(mypackage.ints) $(mypackage.int3s) \
		$(mypackage.opts) $(mypackage.trans_opts) mypackage.init

libmypackage.a: $(mypackage.os)
	rm -f libmypackage.a
	$(AR) $(ARFLAGS) libmypackage.a $(mypackage.os) $(MLOBJS)
	$(RANLIB) $(RANLIBFLAGS) mypackage.a

libmypackage.so: $(mypackage.pic_os)
	$(ML) $(MLFLAGS) --make-shared-lib -o libmypackage.so \
		$(mypackage.pic_os) $(MLPICOBJS) $(MLLIBS)

libmypackage.init:
	...

clean:
	rm -f libmypackage.a libmypackage.so
If necessary, you can override the default definitions of the variables such as `ML', `MLFLAGS', `MLPICOBJS', and `MLLIBS' to customize the way shared libraries are built. Similarly `AR', `ARFLAGS', `MLOBJS', `RANLIB', and `RANLIBFLAGS' control the way static libraries are built. (The `MLOBJS' variable is supposed to contain a list of additional object files to link into the library, while the `MLLIBS' variable should contain a list of `-l' options naming other libraries used by this library. `MLPICOBJS' is described below.)
Note that to use a library, as well as the shared or static object library, you also need the interface files. That's why the `libmypackage' target builds `$(mypackage.ints)' and `$(mypackage.int3s)'. If the people using the library are going to use intermodule optimization, you will also need the intermodule optimization interfaces. The `libmypackage' target will build `$(mypackage.opts)' if `--intermodule-optimization' is specified in your `MCFLAGS' variable (this is recommended). Similarly, if the people using the library are going to use transitive intermodule optimization, you will also need the transitive intermodule optimization interfaces (`$(mypackage.trans_opts)'). These will be built if `--trans-intermod-opt' is specified in your `MCFLAGS' variable.
In addition, with certain compilation grades, programs will need to execute some startup code to initialize the library; the `mypackage.init' file contains information about initialization code for the library. The `libmypackage' target will build this file.
On some platforms, shared objects must be created using position independent code (PIC), which requires passing some special options to the C compiler. On these platforms, Mmake will create `.pic_o' files, and `$(mypackage.pic_os)' will contain a list of the `.pic_o' files for the library whose top-level module is `mypackage'. In addition, `$(MLPICOBJS)' will be set to `$(MLOBJS)' with all occurrences of `.o' replaced with `.pic_o'. On other platforms, position independent code is the default, so `$(mypackage.pic_os)' will just be the same as `$(mypackage.os)', which contains a list of the `.o' files for that module, and `$(MLPICOBJS)' will be the same as `$(MLOBJS)'.
If you want, once you have built a library, you could then install (i.e. copy) the shared object library, the static object library, the interface files (possibly including the optimization interface files and the transitive optimization interface files), and the initialization file into a different directory, or into several different directories, for that matter -- though it is probably easiest for the users of the library if you keep them in a single directory. Or alternatively, you could package them up into a `tar', `shar', or `zip' archive and ship them to the people who will use the library.
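As a rough sketch (the exact file names depend on your set-up and on whether you use `--use-subdirs'), packaging the `mypackage' library up as a `tar' archive might look like:

tar -cf mypackage.tar libmypackage.a libmypackage.so mypackage.init \
    *.int *.int2 *.int3 *.opt *.trans_opt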
To use a library, you need to set the Mmake variables `VPATH', `MCFLAGS', `MLFLAGS', `MLLIBS', and `C2INITARGS' to specify the name and location of the library or libraries that you wish to use. If you are using `--intermodule-optimization', you may also need to set `MGNUCFLAGS' if the library uses the C interface. For example, if you want to link in the libraries `mypackage' and `myotherlib', which were built in the directories `/some/directory/mypackage' and `/some/directory/myotherlib' respectively, you could use the following settings:
# Specify the location of the `mypackage' and `myotherlib' directories
MYPACKAGE_DIR = /some/directory/mypackage
MYOTHERLIB_DIR = /some/directory/myotherlib

# The following stuff tells Mmake to use the two libraries
VPATH = $(MYPACKAGE_DIR):$(MYOTHERLIB_DIR):$(MMAKE_VPATH)
MCFLAGS = -I$(MYPACKAGE_DIR) -I$(MYOTHERLIB_DIR) $(EXTRA_MCFLAGS)
MLFLAGS = -R$(MYPACKAGE_DIR) -R$(MYOTHERLIB_DIR) $(EXTRA_MLFLAGS) \
          -L$(MYPACKAGE_DIR) -L$(MYOTHERLIB_DIR)
MLLIBS = -lmypackage -lmyotherlib $(EXTRA_MLLIBS)
C2INITARGS = $(MYPACKAGE_DIR)/mypackage.init \
             $(MYOTHERLIB_DIR)/myotherlib.init

# This line may be needed if `--intermodule-optimization'
# is in `MCFLAGS'. `-I' options should be added for any other
# directories containing header files that the libraries require.
MGNUCFLAGS = -I$(MYPACKAGE_DIR) -I$(MYOTHERLIB_DIR) $(EXTRA_MGNUCFLAGS)
Here `VPATH' is a colon-separated list of path names specifying directories that Mmake will search for interface files. The `-I' options in `MCFLAGS' tell `mmc' where to find the interface files. The `-R' options in `MLFLAGS' tell the loader where to find shared libraries, and the `-L' options tell the linker where to find libraries. (Note that the `-R' options must precede the `-L' options.) The `-l' options tell the linker which libraries to link with. The extra arguments to `c2init' specified in the `C2INITARGS' variable tell `c2init' where to find the `.init' files for the libraries (so that it can generate appropriate initialization code) as well as telling Mmake that any `_init.c' files generated depend on these files. The `-I' options in `MGNUCFLAGS' tell the C preprocessor where to find the header files for the libraries.
The example above assumes that the static object library, shared object library, interface files and initialization file for each Mercury library being used are all put in a single directory, which is probably the simplest way of organizing things, but the Mercury implementation does not require that.
In order to better support using and installing libraries in multiple grades, `mmake' now has support for alternative library directory hierarchies. These have the same structure as the `prefix/lib/mercury' tree, including the different subdirectories for different grades and different machine architectures.
In order to support the installation of a library into such a tree, you simply need to specify (e.g. in your `Mmakefile') the path prefix and the list of grades to install:
INSTALL_PREFIX = /my/install/dir
LIBGRADES = asm_fast asm_fast.gc.tr.debug
This specifies that libraries should be installed in `/my/install/dir/lib/mercury', in the default grade plus `asm_fast' and `asm_fast.gc.tr.debug'. If `INSTALL_PREFIX' is not specified, `mmake' will attempt to install the library in the same place as the standard Mercury libraries. If `LIBGRADES' is not specified, `mmake' will use the Mercury compiler's default set of grades, which may or may not correspond to the actual set of grades in which the standard Mercury libraries were installed.
To actually install a library `libfoo', use the `mmake' target `libfoo.install'. This also installs all the needed interface files, and (if intermodule optimisation is enabled) the relevant intermodule optimisation files.
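For example, to install the `mypackage' library described earlier, you might run:

mmake mypackage.depend
mmake libmypackage.install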
One can override the list of grades to install for a given library `libfoo' by setting the `LIBGRADES-foo' variable, or add to it by setting `EXTRA_LIBGRADES-foo'.
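For example, with hypothetical settings such as the following, `libmypackage' would be installed only in the two grades listed, while `libmyotherlib' would get its usual list of grades plus one extra:

LIBGRADES-mypackage = asm_fast asm_fast.gc.tr.debug
EXTRA_LIBGRADES-myotherlib = asm_fast.gc.tr.debug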
The command used to install each file is specified by `INSTALL'. If `INSTALL' is not specified, `cp' will be used.
The command used to create directories is specified by `INSTALL_MKDIR'. If `INSTALL_MKDIR' is not specified, `mkdir -p' will be used.
Note that currently it is not possible to set the installation prefix on a library-by-library basis.
Once a library is installed in such a hierarchy, using it is easy. Suppose the user wishes to use the library `mypackage' (installed in the tree rooted at `/some/directory/mypackage') and the library `myotherlib' (installed in the tree rooted at `/some/directory/myotherlib'). The user need only set the following Mmake variables:
EXTRA_LIB_DIRS = /some/directory/mypackage/lib/mercury \
                 /some/directory/myotherlib/lib/mercury
EXTRA_LIBRARIES = mypackage myotherlib
Mmake will then ensure that the appropriate directories are searched for the relevant interface files, module initialisation files, compiled libraries, etc.
One can specify extra libraries to be used on a program-by-program basis. For instance, if the program `foo' also uses the library `mylib4foo', but the other programs governed by the Mmakefile don't, then one can declare:
EXTRA_LIBRARIES-foo = mylib4foo
This section gives a quick and simple guide to getting started with the debugger. The remainder of this chapter contains more detailed documentation.
To use the debugger, you must first compile your program with debugging enabled. You can do this by using the `--debug' option to `mmc', or by including `GRADEFLAGS = --debug' in your `Mmakefile'.
bash$ mmc --debug hello.m
Once you've compiled with debugging enabled, you can use the `mdb' command to invoke your program under the debugger:
bash$ mdb ./hello arg1 arg2 ...
Any arguments (such as `arg1 arg2 ...' in this example) that you pass after the program name will be given as arguments to the program.
The debugger will print a start-up message and will then show you the first trace event, namely the call to main/2:
       1:      1  1 CALL pred hello:main/2-0 (det) hello.m:13
mdb>
By hitting enter at the `mdb>' prompt, you can step through the execution of your program to the next trace event:
       2:      2  2 CALL pred io:write_string/3-0 (det) io.m:2837 (hello.m:14)
mdb>
Hello, world
       3:      2  2 EXIT pred io:write_string/3-0 (det) io.m:2837 (hello.m:14)
mdb>
For each trace event, the debugger prints out several pieces of information. The three numbers at the start of the display are the event number, the call sequence number, and the call depth. (You don't really need to pay too much attention to those.) They are followed by the event type (e.g. `CALL' or `EXIT'). After that comes the identification of the procedure in which the event occurred, consisting of the module-qualified name of the predicate or function to which the procedure belongs, followed by its arity, mode number and determinism. This may sometimes be followed by a "path" (see section Tracing of Mercury programs). At the end is the file name and line number of the called procedure and (if available) also the file name and line number of the call.
The most useful mdb commands have single-letter abbreviations. The `alias' command will show these abbreviations:
mdb> alias
?          =>    help
EMPTY      =>    step
NUMBER     =>    step
P          =>    print *
b          =>    break
c          =>    continue
d          =>    stack
f          =>    finish
g          =>    goto
h          =>    help
p          =>    print
r          =>    retry
s          =>    step
v          =>    vars
The `P' or `print *' command will display the values of any live variables in scope. The `f' or `finish' command can be used if you want to skip over a call. The `b' or `break' command can be used to set break-points. The `d' or `stack' command will display the call stack. The `quit' command will exit the debugger.
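For example, continuing the `hello' session above, you could set a break point on line 14 of `hello.m' and then continue execution to it; a sketch of the commands (output not shown) is:

mdb> b hello.m:14
mdb> c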
That should be enough to get you started. But if you have GNU Emacs installed, you should strongly consider using the Emacs interface to `mdb' -- see the following section.
For more information about the available commands, use the `?' or `help' command, or see section Debugger commands.
As well as the command-line debugger, mdb, there is also an Emacs interface to this debugger. Note that the Emacs interface only works with GNU Emacs, not with XEmacs.
With the Emacs interface, the debugger will display your source code as you trace through it, marking the line that is currently being executed, and allowing you to easily set breakpoints on particular lines in your source code. You can have separate windows for the debugger prompt, the source code being executed, and for the output of the program being executed. In addition, most of the mdb commands are accessible via menus.
To start the Emacs interface, you first need to put the following text in the file `.emacs' in your home directory, replacing "/usr/local/mercury-1.0" with the directory that your Mercury implementation was installed in.
(setq load-path
      (cons (expand-file-name "/usr/local/mercury-1.0/lib/mercury/elisp")
            load-path))
(autoload 'mdb "gud" "Invoke the Mercury debugger" t)
Build your program with debugging enabled, as described in section Quick overview or section Preparing a program for debugging. Then start up Emacs, e.g. using the command `emacs', and type M-x mdb RET. Emacs will then prompt you for the mdb command to invoke:
Run mdb (like this): mdb
and you should type in the name of the program that you want to debug and any arguments that you want to pass to it:
Run mdb (like this): mdb ./hello arg1 arg2 ...
Emacs will then create several "buffers": one for the debugger prompt, one for the input and output of the program being executed, and one or more for the source files. By default, Emacs will split the display into two parts, called "windows", so that two of these buffers will be visible. You can use the command C-x o to switch between windows, and you can use the command C-x 2 to split a window into two windows. You can use the "Buffers" menu to select which buffer is displayed in each window.
If you're using X-Windows, then it is a good idea to set the Emacs variable `pop-up-frames' to `t' before starting mdb, since this will cause each buffer to be displayed in a new "frame" (i.e. a new X window). You can set this variable interactively using the `set-variable' command, i.e. M-x set-variable RET pop-up-frames RET t RET. Or you can put `(setq pop-up-frames t)' in the `.emacs' file in your home directory.
For more information on buffers, windows, and frames, see the Emacs documentation.
Another useful Emacs variable is `gud-mdb-directories'. This specifies the list of directories to search for source files. You can use a command such as
M-x set-variable RET gud-mdb-directories RET (list "/foo/bar" "../other" "/home/guest") RET
to set it interactively, or you can put a command like
(setq gud-mdb-directories (list "/foo/bar" "../other" "/home/guest"))
in your `.emacs' file.
At each trace event, the debugger will search for the source file corresponding to that event, first in the same directory as the program, and then in the directories specified by the `gud-mdb-directories' variable. It will display the source file, with the line number corresponding to that trace event marked by an arrow (`=>') at the start of the line.
Several of the debugger features can be accessed by moving the cursor to the relevant part of the source code and then selecting a command from the menu. You can set a break point on a line by moving the cursor to the appropriate line in your source code (e.g. with the arrow keys, or by clicking the mouse there), and then selecting the "Set breakpoint on line" command from the "Breakpoints" sub-menu of the "MDB" menu. You can set a breakpoint on a procedure by moving the cursor over the procedure name and then selecting the "Set breakpoint on procedure" command from the same menu. And you can display the value of a variable by moving the cursor over the variable name and then selecting the "Print variable" command from the "Data browsing" sub-menu of the "MDB" menu. Most of the menu commands also have keyboard short-cuts, which are displayed on the menu.
Note that mdb's `context' command should not be used if you are using the Emacs interface, otherwise the Emacs interface won't be able to parse the file names and line numbers that mdb outputs, and so it won't be able to highlight the correct location in the source code.
The Mercury debugger is based on a modified version of the box model on which the four-port debuggers of most Prolog systems are based. Such debuggers abstract the execution of a program into a sequence, also called a trace, of execution events of various kinds. The four kinds of events supported by most Prolog systems (their ports) are call, exit, redo, and fail.
Mercury also supports these four kinds of events, but not all events can occur for every procedure call. Which events can occur for a procedure call, and in what order, depend on the determinism of the procedure. The possible event sequences for procedures of the various determinisms are as follows.
In addition to these four event types, Mercury supports exception events. An exception event occurs when an exception has been thrown inside a procedure, and control is about to propagate this exception to the caller. An exception event can replace the final exit or fail event in the event sequences above or, in the case of erroneous procedures, can come after the call event.
Besides the event types call, exit, redo, fail and exception, which describe the interface of a call, Mercury also supports several types of events that report on what is happening internal to a call. Each of these internal event types has an associated parameter called a path. The internal event types are:
A path is a sequence of path components separated by semicolons. Each path component is one of the following:
cnum
    The num'th conjunct of a conjunction.
dnum
    The num'th disjunct of a disjunction.
snum
    The num'th arm of a switch.
?
    The condition of an if-then-else.
t
    The `then' part of an if-then-else.
e
    The `else' part of an if-then-else.
~
    The goal inside a negation.
q
    The goal inside a quantification.
A path describes the position of a goal inside the body of a procedure definition. For example, if the procedure body is a disjunction in which each disjunct is a conjunction, then the path `d2;c3;' denotes the third conjunct within the second disjunct. If the third conjunct within the second disjunct is an atomic goal such as a call or a unification, then this will be the only goal whose path has `d2;c3;' as a prefix. If it is a compound goal, then its components will all have paths that have `d2;c3;' as a prefix, e.g. if it is an if-then-else, then its three components will have the paths `d2;c3;?;', `d2;c3;t;' and `d2;c3;e;'.
Paths refer to the internal form of the procedure definition. When debugging is enabled (and the option --trace-optimized is not given), the compiler will try to keep this form as close as possible to the source form of the procedure, in order to make event paths as useful as possible to the programmer. Due to the compiler's flattening of terms, and its introduction of extra unifications to implement calls in implied modes, the number of conjuncts in a conjunction will frequently differ between the source and internal form of a procedure. This is rarely a problem, however, as long as you know about it. Mode reordering can be a bit more of a problem, but it can be avoided by writing single-mode predicates and functions so that producers come before consumers. The compiler transformation that potentially causes the most trouble in the interpretation of goal paths is the conversion of disjunctions into switches. In most cases, a disjunction is transformed into a single switch, and it is usually easy to guess, just from the events within a switch arm, which disjunct the switch arm corresponds to. Some cases are more complex; for example, it is possible for a single disjunction to be transformed into several switches, possibly with other, smaller disjunctions inside them. In such cases, making sense of goal paths may require a look at the internal form of the procedure. You can ask the compiler to generate a file with the internal forms of the procedures in a given module by including the options `-dfinal -Dpaths' on the command line when compiling that module.
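For example, a command along the following lines (for a hypothetical module `foo.m') should produce such a dump while compiling the module with deep tracing:

mmc --trace deep -dfinal -Dpaths -c foo.m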
When you compile a Mercury program, you can specify whether you want to be able to run the Mercury debugger on the program or not. If you do, the compiler embeds calls to the Mercury debugging system into the executable code of the program, at the execution points that represent trace events. At each event, the debugging system decides whether to give control back to the executable immediately, or whether to first give control to you, allowing you to examine the state of the computation and issue commands.
Mercury supports two broad ways of preparing a program for debugging.
The simpler way is to compile a program in a debugging grade, which you can do directly by specifying a grade that includes the word "debug" (e.g. `asm_fast.gc.debug'), or indirectly by specifying the `--debug' grade option to the compiler, linker, and other tools (in particular mmc, mgnuc, ml, and c2init). If you follow this way, and accept the default settings of the various compiler options that control the selection of trace events (which are described below), you will be assured of being able to get control at every execution point that represents a potential trace event, which is very convenient.
The two drawbacks of using a debugging grade are the large size of the resulting executables, and the fact that often you discover that you need to debug a big program only after having built it in a non-debugging grade. This is why Mercury also supports another way to prepare a program for debugging, one that does not require the use of a debugging grade. With this way, you can decide, individually for each module, which of three trace levels, `none', `shallow' and `deep', you want to compile them with:
The intended uses of these trace levels are as follows.
In general, it is a good idea for most or all modules that can be called from modules compiled with trace level `deep' to be compiled with at least trace level `shallow'.
You can control what trace level a module is compiled with by giving one of the following compiler options:
As the name implies, the fourth alternative is the default, which is why by default you get no debugging capability in non-debugging grades and full debugging capability in debugging grades. The table also shows that in a debugging grade, no module can be compiled with trace level `none'.
Important note: If you are not using a debugging grade, but you compile some modules with `--trace shallow' or `--trace deep', then you must also pass the `--trace' (or `-t') option to c2init and to the Mercury linker. If you're using Mmake, then you can do this by including `--trace' in the `C2INITFLAGS' and `MLFLAGS' variables.
If you're using Mmake, then you can also set the compilation options for a single module named Module by setting the Mmake variable `MCFLAGS-Module'. For example, to compile the file `foo.m' with deep tracing, `bar.m' with shallow tracing, and everything else with no tracing, you could use the following:
C2INITFLAGS = --trace
MLFLAGS = --trace
MCFLAGS-foo = --trace deep
MCFLAGS-bar = --trace shallow
By default, all trace levels other than `none' turn off all compiler optimizations that can affect the sequence of trace events generated by the program, such as inlining. If you are specifically interested in how the compiler's optimizations affect the trace event sequence, you can specify the option `--trace-optimized', which tells the compiler that it does not have to disable those optimizations. (A small number of low-level optimizations have not yet been enhanced to work properly in the presence of tracing, so the compiler disables these even if `--trace-optimized' is given.)
The executables of Mercury programs by default do not invoke the Mercury debugger even if some or all of their modules were compiled with some form of tracing, and even if the grade of the executable is a debugging grade. This is similar to the behaviour of executables created by the implementations of other languages; for example, the executable of a C program compiled with `-g' does not automatically invoke gdb or dbx etc. when it is executed.
Unlike those other language implementations, when you invoke the Mercury debugger `mdb', you invoke it not just with the name of an executable but with the command line you want to debug. If something goes wrong when you execute the command
prog arg1 arg2 ...
and you want to find the cause of the problem, you must execute the command
mdb prog arg1 arg2 ...
because you do not get a chance to specify the command line of the program later.
When the debugger starts up, as part of its initialization it executes commands from the following three sources, in order:
The operation of the Mercury debugger `mdb' is based on the following concepts.
The effect of a break point depends on the state of the break point.
Neither of these will happen if the break point is disabled.
Regardless of the print level, the debugger will print any event that causes execution to stop and user interaction to start.
When the debugger (as opposed to the program being debugged) is interacting with the user, the debugger prints a prompt and reads in a line of text, which it will interpret as its next command. Each command line consists of several words separated by white space. The first word is the name of the command, while any other words give options and/or parameters to the command.
Some commands take a number as their first parameter. For such commands, users can type `number command' as well as `command number'. The debugger will treat the former as the latter, even if the number and the command are not separated by white space.
query module1 module2 ...
cc_query module1 module2 ...
io_query module1 module2 ...
The module names module1, module2, ... specify which modules will be imported. Note that you can also add new modules to the list of imports directly at the query prompt, by using a command of the form `[module]', e.g. `[int]'. You need to import all the modules that define symbols used in your query. Queries can only use symbols that are exported from a module; entities declared only in a module's implementation section cannot be used.
The three variants differ in what kind of goals they allow. For goals which perform I/O, you need to use `io_query'; this lets you type in the goal using DCG syntax. For goals which don't do I/O, but which have determinism `cc_nondet' or `cc_multi', you need to use `cc_query'; this finds only one solution to the specified goal. For all other goals, you can use plain `query', which finds all the solutions to the goal.
For `query' and `cc_query', the debugger will print out all the variables in the goal using `io__write'. The goal must bind all of its variables to ground terms, otherwise you will get a mode error.
The current implementation works by compiling the queries on-the-fly and then dynamically linking them into the program being debugged. Thus it may take a little while for your query to be executed. Each query will be written to a file named `query.m' in the current directory, so make sure you don't name your source file `query.m'. Note that dynamic linking may not be supported on some systems; if you are using a system for which dynamic linking is not supported, you will get an error message when you try to run these commands.
You may also need to build your program using shared libraries for interactive queries to work. With Linux on the Intel x86 architecture, the default is for executables to be statically linked, which means that dynamic linking won't work, and hence interactive queries won't work either (the error message is rather obscure: the dynamic linker complains about the symbol `__data_start' being undefined). To build with shared libraries, you can use `MGNUCFLAGS=--pic-reg' and `MLFLAGS=--shared' in your Mmakefile. See the `README.Linux' file in the Mercury distribution for more details.
step [-NSans] [num]
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is not strict, and it uses the default print level.
A command line containing only a number num is interpreted as if it were `step num'.
An empty command line is interpreted as `step 1'.
goto [-NSans] num
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
next [-NSans] [num]
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
finish [-NSans] [num]
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
exception [-NSans]
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
return [-NSans]
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
forward [-NSans]
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
mindepth [-NSans] depth
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
maxdepth [-NSans] depth
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is strict, and it uses the default print level.
continue [-NSans]
The options `-n' or `--none', `-s' or `--some', `-a' or `--all' specify the print level to use for the duration of the command, while the options `-S' or `--strict' and `-N' or `--nostrict' specify the strictness of the command.
By default, this command is not strict. The print level used by the command by default depends on the final strictness level: if the command is strict, it is `none', otherwise it is `some'.
retry
The command will report an error unless the values of all the input arguments are available at the current port. (The compiler will keep the values of the input arguments of traced predicates as long as possible, but it cannot keep them beyond the point where they are destructively updated.)
The debugger can perform a retry only from an exit or fail port; only at these ports does the debugger have enough information to figure out how to reset the stacks. If the debugger is not at such a port when a retry command is given, the debugger will continue forward execution until it reaches an exit or fail port of the call to be retried before it performs the retry. This may require a noticeable amount of time.
retry num
vars
print [-fpv] name
print [-fpv] num
The options `-f' or `--flat', `-p' or `--pretty', and `-v' or `--verbose' specify the format to use for printing.
print [-fpv] *
The options `-f' or `--flat', `-p' or `--pretty', and `-v' or `--verbose' specify the format to use for printing.
print [-fpv] exception
The options `-f' or `--flat', `-p' or `--pretty', and `-v' or `--verbose' specify the format to use for printing.
browse [-fpv] name
browse [-fpv] num
The interactive term browser allows you to selectively examine particular subterms. The depth and size of printed terms may be controlled. The displayed terms may also be clipped to fit within a single screen.
The options `-f' or `--flat', `-p' or `--pretty', and `-v' or `--verbose' specify the format to use for browsing.
For further documentation on the interactive term browser, invoke the `browse' command from within `mdb' and then type `help' at the `browser>' prompt.
browse [-fpv] exception
The options `-f' or `--flat', `-p' or `--pretty', and `-v' or `--verbose' specify the format to use for browsing.
stack [-d]
The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
This command will report an error if there is no stack trace information available about any ancestor.
up [-d] [num]
If num is not specified, the default value is one.
This command will report an error if the current environment doesn't have the required number of ancestors, or if there is no execution trace information about the requested ancestor, or if there is no stack trace information about any of the ancestors between the current environment and the requested ancestor.
The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
down [-d] [num]
If num is not specified, the default value is one.
This command will report an error if there is no execution trace information about the requested descendant.
The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
level [-d] [num]
This command will report an error if the current environment doesn't have the required number of ancestors, or if there is no execution trace information about the requested ancestor, or if there is no stack trace information about any of the ancestors between the current environment and the requested ancestor.
The option `-d' or `--detailed' specifies that for each ancestor call, the call's event number, sequence number and depth should also be printed if the call is to a procedure that is being execution traced.
current
set [-APBfpv] param value
The browser maintains separate configuration parameters for the three commands `print *', `print var', and `browse var'. A single `set' command can modify the parameters for more than one of these; the options `-A' or `--print-all', `-P' or `--print', and `-B' or `--browse' select which commands will be affected by the change. If none of these options is given, the default is to affect all commands.
The browser also maintains separate configuration parameters for the three different output formats. This applies to all parameters except for the format itself. The options `-f' or `--flat', `-p' or `--pretty', and `-v' or `--verbose' select which formats will be affected by the change. If none of these options is given, the default is to affect all formats. In the case that the format itself is being set, these options are ignored.
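For example, a command like the following should switch just the `browse' command to the pretty format, leaving the `print' and `print *' settings alone (a sketch; check the debugger's `help' output for the exact parameter names):

set -B format pretty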
break [-PS] filename:linenumber
The options `-P' or `--print', and `-S' or `--stop' specify the action to be taken at the break point.
By default, the initial state of the break point is `stop'.
break [-AOPSaei] proc-spec
The options `-A' or `--select-all', and `-O' or `--select-one' select the action to be taken if the specification matches more than one procedure. If you have specified option `-A' or `--select-all', mdb will put a breakpoint on all matched procedures, whereas if you have specified option `-O' or `--select-one', mdb will report an error. By default, mdb will ask you whether you want to put a breakpoint on all matched procedures or just one, and if so, which one.
The options `-P' or `--print', and `-S' or `--stop' specify the action to be taken at the break point, while the options `-a' or `--all', `-e' or `--entry', and `-i' or `--interface' specify the invocation conditions of the break point.
By default, the action of the break point is `stop', and its invocation condition is `interface'.
break [-PS] here
The options `-P' or `--print', and `-S' or `--stop' specify the action to be taken at the break point.
By default, the initial state of the break point is `stop'.
break info
disable num
disable *
disable
enable num
enable *
enable
delete num
delete *
delete
modules
procedures module
register
mmc_options option1 option2 ...
printlevel none
printlevel some
printlevel all
printlevel
echo on
echo off
echo
scroll on
scroll off
scroll size
scroll
context none
context before
context after
context prevline
context nextline
context
alias name command [command-parameter ...]
If name is the upper-case word `EMPTY', the debugger will substitute the given command and parameters whenever the user types in an empty command line.
If name is the upper-case word `NUMBER', the debugger will insert the given command and parameters before the command line whenever the user types in a command line that consists of a single number.
unalias name
document_category slot category
document category slot item
help category item
help word
help
histogram_all filename
histogram_exp filename
clear_histogram
The following commands are intended for use by the developers of the Mercury implementation.
nondet_stack
stack_regs
all_regs
table_io
table_io stats
table_io start
table_io end
proc_stats
proc_stats filename
label_stats
label_stats filename
source [-i] filename
The option `-i' or `--ignore-errors' tells `mdb' not to complain if the named file does not exist or is not readable.
save filename
quit [-y]
End-of-file on the debugger's input is considered a quit command.
The Mercury compiler allows compilation of predicates for execution using the Aditi2 deductive database system. There are several sources of useful information:
As an alternative to compiling stand-alone programs, you can execute queries using the Aditi query shell.
The Aditi interface library is installed as part of the Aditi installation process. To use the Aditi library in your programs, use the Mmakefile in `$ADITI_HOME/demos/transactions' as a template.
The Mercury profiler `mprof' is a tool which can be used to analyze a Mercury program's performance, so that the programmer can determine which predicates or functions are taking up a disproportionate amount of the execution time.
To obtain the best trade-off between productivity and efficiency, programmers should not spend too much time optimizing their code until they know which parts of the code are really taking up most of the time. Only once the code has been profiled should the programmer consider making optimizations that would improve efficiency at the expense of readability or ease of maintenance.
A good profiler is a tool that should be part of every software engineer's toolkit.
To enable profiling, your program must be built with profiling enabled. This can be done by passing the `-p' (`--profiling') option to `mmc' (and also to `mgnuc' and `ml', if you invoke them separately). If you are using Mmake, then you can do this by setting the `GRADEFLAGS' variable in your Mmakefile, e.g. by adding the line `GRADEFLAGS=--profiling'. For more information about the different grades, see section Compilation model options.
Enabling profiling has several effects. Firstly, it causes the compiler to generate slightly modified code which counts the number of times each predicate or function is called, and for every call, records the caller and callee. Secondly, your program will be linked with versions of the library and runtime that were compiled with profiling enabled. (It also has the effect that, for each source file, the compiler generates the static call graph for that file in `module.prof'.)
You can control whether profiling measures real (elapsed) time, user time plus system time, or user time only, by including the options `-Tr', `-Tp', or `-Tv' respectively in the environment variable MERCURY_OPTIONS when you run the program to be profiled. Currently, the `-Tp' and `-Tv' options don't work on Windows, so on Windows you must explicitly specify `-Tr'.
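For example, to profile real (elapsed) time for a single run of a program (placeholder name), you might use:

MERCURY_OPTIONS=-Tr ./my_program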
The default is user time plus system time, which counts all time spent executing the process, including time spent by the operating system performing work on behalf of the process, but not including time that the process was suspended (e.g. due to time slicing, or while waiting for input). When measuring real time, profiling counts even periods during which the process was suspended. When measuring user time only, profiling does not count time inside the operating system at all.
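For example, using Bourne-shell syntax and a hypothetical profiled executable `hello', a run that measures real (elapsed) time could be started as follows:
MERCURY_OPTIONS=-Tr ./hello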
The next step is to run your program. The profiling version of your program will collect profiling information during execution, and save this information in the files `Prof.Counts', `Prof.Decl', and `Prof.CallPair'. (`Prof.Decl' contains the names of the procedures and their associated addresses, `Prof.CallPair' records the number of times each procedure was called by each different caller, and `Prof.Counts' records the number of times that execution was in each procedure when a profiling interrupt occurred.)
It is also possible to combine profiling results from multiple runs of your program. You can do this by running your program several times, and typing `mprof_merge_counts' after each run.
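For instance, a sequence of two profiled runs of a hypothetical executable `hello' (the program and input file names are purely illustrative) might look like this:
./hello < test1.in
mprof_merge_counts
./hello < test2.in
mprof_merge_counts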
Due to a known timing-related bug in our code, you may occasionally get segmentation violations when running your program with time profiling enabled. If this happens, just run it again -- the problem occurs only very rarely.
To display the profile, just type `mprof'. This will read the `Prof.*' files and display the flat profile in a nice human-readable format. If you also want to see the call graph profile, which takes a lot longer to generate, type `mprof -c'.
Note that `mprof' can take quite a while to execute, and will usually produce quite a lot of output, so you will usually want to redirect the output into a file with a command such as `mprof > mprof.out'.
For programs built with `--high-level-code', you also need to pass the `--no-demangle' option to `mprof'.
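Putting this together, a typical invocation might be
mprof -c > mprof.out
or, for a program built with `--high-level-code',
mprof -c --no-demangle > mprof.out
(the output file name is arbitrary).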
The profile output consists of three major sections: the call graph profile, the flat profile, and the alphabetic listing.
The call graph profile presents the local call graph of each procedure. For each procedure it shows the parents (callers) and children (callees) of that procedure, and shows the execution time and call counts for each parent and child. It is sorted on the total amount of time spent in the procedure and all of its descendents (i.e. all of the procedures that it calls, directly or indirectly).
The flat profile presents just the execution time spent in each procedure. It does not count the time spent in descendents of a procedure.
The alphabetic listing just lists the procedures in alphabetical order, along with their index number in the call graph profile, so that you can quickly find the entry for a particular procedure in the call graph profile.
The profiler works by interrupting the program at frequent intervals, and each time recording the currently active procedure and its caller. It uses these counts to determine the proportion of the total time spent in each procedure. This means that the figures calculated for these times are only a statistical approximation to the real values, and so they should be treated with some caution.
The time spent in a procedure and its descendents is calculated by propagating the times up the call graph, assuming that each call to a procedure from a particular caller takes the same amount of time. This assumption is usually reasonable, but again the results should be treated with caution.
Note that any time spent in a C function (e.g. time spent in `GC_malloc()', which does memory allocation and garbage collection) is credited to the Mercury procedure that called that C function.
Here is a small portion of the call graph profile from an example program.
                                  called/total       parents
index  %time    self descendents  called+self    name           index
                                  called/total       children

                                                     <spontaneous>
[1]    100.0    0.00        0.75         0       call_engine_label [1]
                0.00        0.75         1/1         do_interpreter [3]
-----------------------------------------------
                0.00        0.75         1/1         do_interpreter [3]
[2]    100.0    0.00        0.75         1       io__run/0(0) [2]
                0.00        0.00         1/1         io__init_state/2(0) [11]
                0.00        0.74         1/1         main/2(0) [4]
-----------------------------------------------
                0.00        0.75         1/1         call_engine_label [1]
[3]    100.0    0.00        0.75         1       do_interpreter [3]
                0.00        0.75         1/1         io__run/0(0) [2]
-----------------------------------------------
                0.00        0.74         1/1         io__run/0(0) [2]
[4]     99.9    0.00        0.74         1       main/2(0) [4]
                0.00        0.74         1/1         sort/2(0) [5]
                0.00        0.00         1/1         print_list/3(0) [16]
                0.00        0.00         1/10        io__write_string/3(0) [18]
-----------------------------------------------
                0.00        0.74         1/1         main/2(0) [4]
[5]     99.9    0.00        0.74         1       sort/2(0) [5]
                0.05        0.65         1/1         list__perm/2(0) [6]
                0.00        0.09     40320/40320     sorted/1(0) [10]
-----------------------------------------------
                                         8           list__perm/2(0) [6]
                0.05        0.65         1/1         sort/2(0) [5]
[6]     86.6    0.05        0.65         1+8     list__perm/2(0) [6]
                0.00        0.60      5914/5914      list__insert/3(2) [7]
                                         8           list__perm/2(0) [6]
-----------------------------------------------
                0.00        0.60      5914/5914      list__perm/2(0) [6]
[7]     80.0    0.00        0.60      5914        list__insert/3(2) [7]
                0.60        0.60      5914/5914      list__delete/3(3) [8]
-----------------------------------------------
                                     40319           list__delete/3(3) [8]
                0.60        0.60      5914/5914      list__insert/3(2) [7]
[8]     80.0    0.60        0.60      5914+40319  list__delete/3(3) [8]
                                     40319           list__delete/3(3) [8]
-----------------------------------------------
                0.00        0.00         3/69283     tree234__set/4(0) [15]
                0.09        0.09     69280/69283     sorted/1(0) [10]
[9]     13.3    0.10        0.10     69283        compare/3(0) [9]
                0.00        0.00         3/3         __Compare___io__stream/0(0) [20]
                0.00        0.00     69280/69280     builtin_compare_int/3(0) [27]
-----------------------------------------------
                0.00        0.09     40320/40320     sort/2(0) [5]
[10]    13.3    0.00        0.09     40320        sorted/1(0) [10]
                0.09        0.09     69280/69283     compare/3(0) [9]
-----------------------------------------------
The first entry is `call_engine_label' and its parent is `<spontaneous>', meaning that it is the root of the call graph. (The first three entries, `call_engine_label', `do_interpreter', and `io__run/0' are all part of the Mercury runtime; `main/2' is the entry point to the user's program.)
Each entry of the call graph profile consists of three sections: the parent procedures, the current procedure, and the child procedures.
Reading across from the left, the fields for the current procedure are: the index number, the percentage of total time, the self time, the descendent time, the number of calls (called+self), and the procedure name.
The predicate or function names are not just followed by their arity but also by their mode in brackets. A mode of zero corresponds to the first mode declaration of that predicate in the source code. For example, `list__delete/3(3)' corresponds to the `(out, out, in)' mode of `list__delete/3'.
For the parent and child procedures, the self and descendent times have slightly different meanings. For the parent procedures, they represent the proportion of the current procedure's self and descendent time due to that parent. These times are obtained under the assumption that each call contributes equally to the total time of the current procedure.
It is also possible to profile memory allocation. To enable memory profiling, your program must be built with memory profiling enabled, using the `--memory-profiling' option. Then, as with time profiling, you run your program to create the profiling data. This will be stored in the files `Prof.MemoryWords', `Prof.MemoryCells', `Prof.Decl', and `Prof.CallPair'.
To create the profile, you need to invoke `mprof' with the `-m' (`--profile memory-words') option. This will profile the amount of memory allocated, measured in units of words. (A word is 4 bytes on a 32-bit architecture, and 8 bytes on a 64-bit architecture.)
Alternatively, you can use `mprof''s `-M' (`--profile memory-cells') option. This will profile memory in units of "cells". A cell is a group of words allocated together in a single allocation, to hold a single object. Selecting this option will therefore profile the number of memory allocations, while ignoring the size of each memory allocation.
With memory profiling, just as with time profiling, you can use the `-c' (`--call-graph') option to display call graph profiles in addition to flat profiles.
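As a sketch, for a hypothetical single-module program `hello.m' the whole memory-profiling cycle might look like this (shell syntax assumed; output file names are arbitrary):
mmc --memory-profiling hello.m
./hello
mprof -m -c > mprof.words
mprof -M -c > mprof.cells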
Note that Mercury's memory profiler will only tell you about allocation, not about deallocation (garbage collection). It can tell you how much memory was allocated by each procedure, but it won't tell you how long the memory was live for, or how much of that memory was garbage-collected.
This section contains a brief description of all the options available for `mmc', the Mercury compiler. Sometimes this list is a little out-of-date; use `mmc --help' to get the most up-to-date list.
mmc
is invoked as
mmc [options] arguments
Arguments can be either module names or file names. Arguments ending in `.m' are assumed to be file names, while other arguments are assumed to be module names, with `.' (rather than `__' or `:') as module qualifier. If you specify a module name such as `foo.bar.baz', the compiler will look for the source in files `foo.bar.baz.m', `bar.baz.m', and `baz.m', in that order.
Options are either short (single-letter) options preceded by a single `-', or long options preceded by `--'. Options are case-sensitive. We call options that do not take arguments flags. Single-letter flags may be grouped with a single `-', e.g. `-vVc'. Single-letter flags may be negated by appending another trailing `-', e.g. `-v-'. Long flags may be negated by preceding them with `no-', e.g. `--no-verbose'.
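For example, the following two commands are equivalent ways of verbosely generating target code only for a hypothetical module `hello' (the module name is for illustration):
mmc -v -C hello.m
mmc -vC hello.m
Verbosity can be switched off again with either `-v-' or `--no-verbose'.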
-w
--inhibit-warnings
--halt-at-warn
--halt-at-syntax-error
--inhibit-accumulator-warnings
--no-warn-singleton-variables
--no-warn-missing-det-decls
--no-warn-det-decls-too-lax
--no-warn-nothing-exported
--warn-unused-args
--warn-interface-imports
--warn-missing-opt-files
--warn-missing-trans-opt-files
--warn-non-stratification
--no-warn-simple-code
--warn-duplicate-calls
--no-warn-missing-module-name
--no-warn-wrong-module-name
-v
--verbose
-V
--very-verbose
-E
--verbose-error-messages
-S
--statistics
-T
--debug-types
-N
--debug-modes
--debug-det, --debug-determinism
--debug-opt
--debug-vn <n>
--debug-pd
--debug-rl-gen
--debug-rl-opt
-M
--generate-dependencies
--generate-module-order
-i
--make-int
--make-interface
--make-short-int
--make-short-interface
--make-priv-int
--make-private-interface
--make-opt-int
--make-optimization-interface
--make-trans-opt
--make-transitive-optimization-interface
-G
--convert-to-goedel
-P
--pretty-print
--convert-to-mercury
--typecheck-only
-e
--errorcheck-only
-C
--target-code-only
-c
--compile-only
--aditi-only
--output-grade-string
--no-assume-gmake
--trace-level level
--trace-optimized
--no-delay-death
--stack-trace-higher-order
--generate-bytecode
--auto-comments
-n-, --no-line-numbers
--show-dependency-graph
-d stage
--dump-hlds stage
--dump-hlds-options options
--dump-rl
--dump-rl-bytecode
--generate-schemas
See the Mercury language reference manual for detailed explanations of these options.
--no-reorder-conj
--no-reorder-disj
--fully-strict
(don't optimize away infinite loops or calls to `error/1')
--infer-all
--infer-types
--infer-modes
--no-infer-det, --no-infer-determinism
--type-inference-iteration-limit n
--mode-inference-iteration-limit n
For detailed explanations, see the "Termination analysis" section of the "Implementation-dependent extensions" chapter in the Mercury Language Reference Manual.
--enable-term
--enable-termination
--chk-term
--check-term
--check-termination
--verb-chk-term
--verb-check-term
--verbose-check-termination
--term-single-arg limit
--termination-single-argument-analysis limit
--termination-norm norm
--term-err-limit limit
--termination-error-limit limit
--term-path-limit limit
--termination-path-limit limit
The following compilation options affect the generated code in such a way that the entire program must be compiled with the same setting of these options, and it must be linked to a version of the Mercury library which has been compiled with the same setting. (Attempting to link object files compiled with different settings of these options will generally result in an error at link time, typically of the form `undefined symbol MR_grade_...' or `symbol MR_runtime_grade multiply defined'.)
The options below must be passed to `mgnuc', `c2init' and `ml' as well as to `mmc'. If you are using Mmake, then you should specify these options in the `GRADEFLAGS' variable rather than specifying them in `MCFLAGS', `MGNUCFLAGS', `C2INITFLAGS' and `MLFLAGS'.
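For example, to build everything in a time-profiling grade you might add a line such as
GRADEFLAGS = --grade asm_fast.gc.prof
to your Mmakefile; the particular grade name here is only an example, made up of grade components like those listed below.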
-s grade
--grade grade
--target c --no-gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels
--target c --gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels
--target c --no-gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels
--target c --gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels
--target c --no-gcc-global-registers --gcc-nonlocal-gotos --asm-labels
--target c --gcc-global-registers --gcc-nonlocal-gotos --asm-labels
--target c --high-level-code
--target il --high-level-code
--target java --high-level-code
--gc conservative
--gc accurate
--profiling
--memory-profiling
--use-trail
--debug
--target c
(grades: none, reg, jump, fast, asm_jump, asm_fast, hlc)
--target asm
(grades: hlc)
--il, --target il
(grades: ilc)
--java, --target java
(grades: java)
--il-only
--compile-to-c
--compile-to-C
--java-only
--gcc-global-registers
(grades: reg, fast, asm_fast)
--no-gcc-global-registers
(grades: none, jump, asm_jump)
--gcc-non-local-gotos
(grades: jump, fast, asm_jump, asm_fast)
--no-gcc-non-local-gotos
(grades: none, reg)
--asm-labels
(grades: asm_jump, asm_fast)
--no-asm-labels
(grades: none, reg, jump, fast)
-H, --high-level-code
(grades: hlc, ilc, java)
--gc {none, conservative, accurate}
--garbage-collection {none, conservative, accurate}
--profiling, --time-profiling
(grades: any grade containing `.prof')
--memory-profiling
(grades: any grade containing `.memprof')
--debug
(grades: any grade containing `.debug')
--use-trail
(grades: any grade containing `.tr')
--pic-reg
(grades: any grade containing `.pic_reg')
Of the options listed below, the `--num-tag-bits' option may be useful for cross-compilation, but apart from that these options are all experimental and are intended for use by developers of the Mercury implementation rather than by ordinary Mercury programmers.
--tags {none, low, high}
--num-tag-bits n
--use-foreign-language foreign language
--no-type-layout
--low-level-debug
--target-debug
--no-trad-passes
--no-reclaim-heap-on-nondet-failure
--no-reclaim-heap-on-semidet-failure
--no-reclaim-heap-on-failure
--cc compiler-name
--c-include-directory dir
--cflags options
--c-debug
--javac compiler-name
--java-compiler compiler-name
--java-flags options
--java-classpath dir
--java-object-file-extension extension
--fact-table-max-array-size size
--fact-table-hash-percent-full percentage
The following options allow the Mercury compiler to optimize the generated C code based on the characteristics of the expected target architecture. The default values of these options will be whatever is appropriate for the host architecture that the Mercury compiler was installed on, so normally there is no need to set these options manually. They might come in handy if you are cross-compiling. But even when cross-compiling, it's probably not worth bothering to set these unless efficiency is absolutely paramount.
--have-delay-slot
--num-real-r-regs n
--num-real-f-regs n
--num-real-r-temps n
--num-real-f-temps n
-O n
--opt-level n
--optimization-level n
--opt-space
--optimize-space
--intermodule-optimization
--trans-intermod-opt
--transitive-intermodule-optimization
--use-opt-files
--use-trans-opt-files
--split-c-files
These optimizations are high-level transformations on our HLDS (high-level data structure).
--no-inlining
--no-inline-simple
--no-inline-single-use
--inline-compound-threshold threshold
--inline-simple-threshold threshold
--intermod-inline-simple-threshold threshold
--no-common-struct
--no-common-goal
--no-follow-code
--optimize-unused-args
--intermod-unused-args
--unneeded-code
--unneeded-code-copy-limit
--optimize-higher-order
--type-specialization
--user-guided-type-specialization
--higher-order-size-limit
--optimize-constant-propagation
--introduce-accumulators
--optimize-constructor-last-call
--optimize-dead-procs
--excess-assign
--optimize-duplicate-calls
--optimize-saved-vars
--deforestation
These optimizations are applied to the medium level intermediate code.
--no-mlds-optimize
--no-optimize-tailcalls
--no-optimize-initializations
These optimizations are applied during the process of generating low-level intermediate code from our high-level data structure.
--no-static-ground-terms
--no-smart-indexing
--dense-switch-req-density percentage
--dense-switch-size size
--lookup-switch-req-density percentage
--lookup-switch-size size
--string-switch-size size
--tag-switch-size size
--try-switch-size size
--binary-switch-size size
--no-middle-rec
--no-simple-neg
--no-follow-vars
These optimizations are transformations that are applied to our low-level intermediate code before emitting C code.
--no-common-data
--no-llds-optimize
--no-optimize-peep
--no-optimize-jumps
--no-optimize-fulljumps
--checked-nondet-tailcalls
--no-optimize-labels
--optimize-dups
--optimize-value-number
--pred-value-number
--no-optimize-frames
--no-optimize-delay-slot
--optimize-repeat n
--optimize-vnrepeat n
These optimizations are applied during the process of generating C intermediate code from our low-level data structure.
--no-emit-c-loops
--use-macro-for-redo-fail
--procs-per-c-function n
These optimizations are applied during the process of compiling the generated C code to machine code object files.
If you are using Mmake, you need to pass these options to `mgnuc' rather than to `mmc'.
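For example, to enable inlined allocation you might add something like
MGNUCFLAGS = --inline-alloc
to your Mmakefile; this is only a sketch, assuming `mgnuc' accepts the option in this form.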
--no-c-optimize
--inline-alloc
--optimize-rl
--optimize-rl-cse
--optimize-rl-invariants
--optimize-rl-index
--detect-rl-streams
-I dir
--search-directory dir
--intermod-directory dir
--use-search-directories-for-intermod
--use-subdirs
-?
-h
--help
--filenames-from-stdin
--aditi
--aditi-user
-o filename
--output-file filename
--link-flags options
-L directory
--library-directory directory
-l library
--library library
--link-object object
The shell scripts in the Mercury compilation environment will use the following environment variables if they are set. There should be little need to use these, because the default values will generally work fine.
MERCURY_DEFAULT_GRADE
MERCURY_C_INCL_DIR
MERCURY_ALL_C_INCL_DIRS (used by `mgnuc')
MERCURY_ALL_MC_C_INCL_DIRS (used by `mmc')
MERCURY_INT_DIR
MERCURY_NC_BUILTIN
MERCURY_C_LIB_DIR
MERCURY_NONSHARED_LIB_DIR
MERCURY_MOD_LIB_DIR
MERCURY_MOD_LIB_MODS
MERCURY_COMPILER
MERCURY_INTERPRETER
MERCURY_MKINIT
MERCURY_DEBUGGER_INIT
MERCURY_OPTIONS
-C size
-D debugger
-p
-P num
-T time-method
`r'
`p'
`v'
Currently, the `-Tp' and `-Tv' options don't work on Windows, so on Windows you must explicitly specify `-Tr'.
--heap-size size
--detstack-size size
--nondetstack-size size
--trail-size size
-i filename
--mdb-in filename
-o filename
--mdb-out filename
-e filename
--mdb-err filename
-m filename
--mdb-tty filename
The Mercury compiler takes special advantage of certain extensions provided by GNU C to generate much more efficient code. We therefore recommend that you use GNU C for compiling Mercury programs. However, if for some reason you wish to use another compiler, it is possible to do so. Here's what you need to do.