The Decision Diagram Merging System

The DDM System (decisiondiagramplugin) adds support for working with decision diagrams to dlvhex. It consists of two major components: a converter between several decision diagram file formats (graphconverter) and a collection of merging and modification operators for decision diagrams.

The control script may simplify merging considerably.

For a quick introduction, take a look at our examples.

File Format Conversions

Decision diagrams can be stored in different file formats. While some of them are human-readable, others are better for automatic processing.

Our system is implemented on top of the logic programming system dlvhex. In order to process diagrams, it is necessary to encode them as a set of facts, which is inconvenient for humans. Hence, the usual workflow is to store diagrams as DOT graphs and convert them into sets of facts only right before merging. The output is then again a diagram encoded as a set of facts, which is converted back into a DOT graph as a postprocessing step.

The supported file formats of our converter are described in the following subsections. To see how the converter is called from command line, read the section on Conversion.


DOT Graphs

The DOT file format is intuitively readable and thus well suited as a human-readable representation of decision diagrams. Additionally, DOT files can directly be visualized using the dot tools.

The following snippet shows a decision diagram as a DOT file.
digraph G {
    root -> case1 [label="A<10"];
    root -> case2 [label="A>20"];
    root -> elsecase [label="else"];
    case1 -> case1a [label="B<10"];
    case1 -> case1b [label="else"];
    case2 -> case2a [label="B<16"];
    case2 -> case2b [label="else"];
    case1a [label="ClassA"];
    case1b [label="ClassB"];
    case2a [label="ClassA"];
    case2b [label="ClassC"];
    elsecase [label="ClassB"];
}

Valid decision diagrams must be connected and acyclic, have exactly one root node, and assign a class label to each leaf node. Moreover, each inner node must have exactly one outgoing else-edge, so that every domain element is classified.

Sets of Facts

dlvhex cannot directly load this format because its input must be a logic program. Thus a diagram must be represented by means of atoms. We define the following predicates:

root(X): node X is the root of the diagram.
innernode(X): X is an inner node.
leafnode(X, C): X is a leaf node with class label C.
conditionaledge(X, Y, O1, Op, O2): there is a directed edge from node X to node Y, labeled with the condition "O1 Op O2".
elseedge(X, Y): there is a directed else-edge from node X to node Y.

The above diagram can therefore be implemented as follows.
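Using these predicates, the DOT example from above could for instance be encoded as follows (a sketch; node and constant names are illustrative):

```prolog
root(root).
innernode(root). innernode(case1). innernode(case2).
leafnode(case1a, classA). leafnode(case1b, classB).
leafnode(case2a, classA). leafnode(case2b, classC).
leafnode(elsecase, classB).
conditionaledge(root, case1, a, '<', 10).
conditionaledge(root, case2, a, '>', 20).
elseedge(root, elsecase).
conditionaledge(case1, case1a, b, '<', 10).
elseedge(case1, case1b).
conditionaledge(case2, case2a, b, '<', 16).
elseedge(case2, case2b).
```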


A very simple and obvious translation from HEX programs into answer-sets is to put all the facts as atoms into a single answer-set. The above diagram can therefore also be represented as:
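For the same example diagram, the answer-set would simply collect the facts as atoms (a sketch following the encoding above):

```prolog
{root(root), innernode(root), innernode(case1), innernode(case2),
 leafnode(case1a,classA), leafnode(case1b,classB), leafnode(case2a,classA),
 leafnode(case2b,classC), leafnode(elsecase,classB),
 conditionaledge(root,case1,a,'<',10), conditionaledge(root,case2,a,'>',20),
 elseedge(root,elsecase), conditionaledge(case1,case1a,b,'<',10),
 elseedge(case1,case1b), conditionaledge(case2,case2a,b,'<',16),
 elseedge(case2,case2b)}
```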

RapidMiner XML Format

RapidMiner is an open-source data mining tool. It uses a proprietary XML file format to store decision trees. This format is also supported by our converter. The details are not relevant for practical work and are therefore skipped. It is only important to know that the import and export functionality for this file format is necessary to process RapidMiner classifiers with the decisiondiagramplugin.


Conversion

For the conversion between the introduced file formats, the plugin installs a tool called graphconverter. It can be used to translate diagrams in any of the supported file formats into semantically equivalent versions in another format. Assume that the diagram is stored in file "". Then the conversion into the corresponding HEX program is done by entering:

graphconverter dot hex < > mydiagram.hex

The result is a set of facts that can be loaded by dlvhex. After dlvhex has done its job, the output will be an answer-set, which is ill-suited for being read by humans. Thus the plugin also supports conversions in the other direction. Assume that dlvhex's output is stored in file "" (using the silent mode, such that the output contains the pure answer-set without any additional information from dlvhex). Then the conversion is done by:

graphconverter as dot < > mydiagram.dot

Between the two converter calls, the diagram is given as HEX program "mydiagram.hex" that can be processed by dlvhex. Even though in principle one can do arbitrary computations upon this representation, it is intended to be used as part of the input for a merging task.

Note that graphconverter reads from standard input and writes to standard output. It expects either one or two parameters. If one parameter is passed, it can be one of:

--toas
--todot

Note that --toas and --todot are only abbreviations for frequent conversions. The more general program call (as used above) passes two parameters, where the first one states the source format and the second one the desired destination format. Both parameters can be anything from the following list.

Format           Parameter name
DOT graph        dot
HEX program      hexprogram or hex
answer-set       answerset or as
RapidMiner XML   rmxml or xml

Applying Operators

This section presupposes familiarity with the merging plugin MELD, especially the usage of merging plans and operators. For a detailed description, see the corresponding documentation.

The DDM System ships with several special merging and modification operators for decision diagrams. They expect the belief bases to be sets of facts which were generated out of decision diagrams using the graphconverter. The output will again be a set of facts that encodes a diagram, which can be back-converted into a DOT file. Intermediate results are sets of answer-sets.

The following snippet shows a typical merging task definition that uses the operators unfold and tobinarydecisiontree. Note that the decision diagram encoded as a logic program is directly pasted into the mapping rules. This was only done in order to give the program as one self-contained example. In practice, the mapping rules would rather access an external source by the use of external atoms.
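As an illustration, such a merging task could look as follows (a sketch in the style of Example 1 below; the belief base name and the inlined diagram are made up):

```
[common signature]
predicate: root/1;
predicate: innernode/1;
predicate: leafnode/2;
predicate: conditionaledge/5;
predicate: elseedge/2;

[belief base]
name: diagram1;
mapping: "
    leafnode(leaf1, class1).
    leafnode(leaf2, class2).
    conditionaledge(root, leaf1, a, \'<\', 10).
    elseedge(root, leaf2).
";

[merging plan]
    operator: tobinarydecisiontree;
        operator: unfold;
```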



The following subsections describe the operators that are included in the plugin. Note that this is just a quick and informal description that should enable the user to explore the capabilities or adapt operators to individual needs. For a more detailed and formal description, we refer to the references.

Modification Operators

Operators with arity 1 are called modification operators. They do not integrate multiple diagrams but manipulate a single one. This is often useful to simplify the structure of the diagram before it is actually merged with others.


unfold

The input is a collection of belief sets, where each of them encodes a general decision diagram. The output will contain the same number of diagrams, where each of them has (independently) been converted into a tree. This is done by duplicating shared subtrees.
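The idea behind unfolding can be sketched in a few lines of Python (illustrative only, not the plugin's implementation; the dictionary-based diagram encoding is made up for this sketch):

```python
from itertools import count

def unfold(edges, root):
    """Turn a rooted DAG into an equivalent tree by duplicating shared subdiagrams.

    edges: dict mapping a node to a list of (edge_label, child) pairs.
    Returns (new_root, tree_edges)."""
    fresh = count(1)
    tree = {}

    def copy(n):
        # a fresh name on every visit guarantees that nothing is shared
        name = f"{n}_{next(fresh)}"
        tree[name] = [(lbl, copy(c)) for lbl, c in edges.get(n, [])]
        return name

    return copy(root), tree

# A diamond-shaped diagram: both outgoing edges of "root" reach the shared node "s".
dag = {"root": [("A<10", "s"), ("else", "s")],
       "s": [("B<5", "leaf1"), ("else", "leaf2")]}
new_root, tree = unfold(dag, "root")
```

After unfolding, the shared node "s" exists in two copies, one per path from the root.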


tobinarydecisiontree

The operator expects the input to encode a diagram that is a tree, i.e. one without sharing of subnodes. The output is again a tree in which each node has at most two successors (a binary tree). This is done by introducing intermediate nodes.
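A sketch of the underlying transformation (edge conditions are omitted for brevity, and the encoding is made up for illustration; the real operator must also distribute the conditions over the intermediate nodes):

```python
from itertools import count

def to_binary(tree, root):
    """Return an equivalent tree in which every node has at most two children,
    introducing auxiliary intermediate nodes where necessary.

    tree: dict mapping a node to a list of children."""
    fresh = count(1)
    out = {}

    def visit(n):
        kids = [visit(c) for c in tree.get(n, [])]
        cur = n
        while len(kids) > 2:
            aux = f"aux_{next(fresh)}"
            out[cur] = [kids[0], aux]   # keep one child, delegate the rest
            cur, kids = aux, kids[1:]
        out[cur] = kids
        return n

    visit(root)
    return out

# A node with four successors becomes a chain of binary nodes.
binary = to_binary({"root": ["a", "b", "c", "d"]}, "root")
```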


orderbinarydecisiontree

The operator is again unary. It expects its input to be a binary decision tree. Its output will be a semantically equivalent binary decision tree where, on each path from the root to a leaf node, the variables are queried in lexical order only.


simplify

The input is a collection of belief sets containing an arbitrary number of decision diagrams. Each of them is then (independently) simplified by algorithms that leave the semantics of the diagram unchanged. That is, only the structure of the diagram is modified, which makes it more readable.

The simplification algorithm will reuse common subtrees (and can thus be seen as the converse of unfolding) and eliminate redundant case distinctions.
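The subtree-sharing half of this simplification can be sketched as follows (the other half, eliminating redundant case distinctions, is not shown; the encoding is made up for illustration):

```python
def share_subtrees(edges, labels, root):
    """Converse of unfolding: structurally identical subtrees are collapsed
    onto a single representative node.

    edges: node -> list of (edge_label, child); labels: leaf node -> class."""
    seen = {}       # canonical subtree structure -> representative node
    out = {}

    def visit(n):
        kids = [(lbl, visit(c)) for lbl, c in edges.get(n, [])]
        key = (labels.get(n), tuple(kids))
        if key not in seen:
            seen[key] = n
            out[n] = kids
        return seen[key]

    return visit(root), out

# Two structurally identical subtrees below "root" collapse into one.
edges = {"root": [("A<10", "s1"), ("else", "s2")],
         "s1": [("B<5", "l1"), ("else", "l2")],
         "s2": [("B<5", "l3"), ("else", "l4")]}
labels = {"l1": "class1", "l2": "class2", "l3": "class1", "l4": "class2"}
new_root, shared = share_subtrees(edges, labels, "root")
```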

Merging Operators

Now we describe the actual merging operators, which have an arity of n > 1.


userpreferences

This operator is n-ary, i.e. arbitrarily many diagrams can be passed. Additionally, it expects arbitrarily many key-value pairs as parameters, where the keys are ignored and the values are of the form:

X>>Y or X>n>Y

where X and Y are names of class labels (as used in leaf nodes) and n is a positive integer. A rule of the form X>>Y expresses that "in doubt, X is preferred to Y", whereas X>n>Y states "X is preferred to Y if there are at least n more input diagrams that vote for X than for Y".

The output is a diagram where each domain element is classified according to these rules. Note that the rules are evaluated in a top-down manner. That is, the result of a prior rule can be overwritten by a later (applicable!) rule.
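One plausible reading of this rule semantics, as a Python sketch (not the operator's actual code; the vote-count interface is made up):

```python
def apply_preferences(votes, rules):
    """votes: class label -> number of input diagrams voting for it.
    rules: strings of form "X>>Y" or "X>n>Y", evaluated top-down;
    a later applicable rule overrides the result of an earlier one."""
    result = max(votes, key=votes.get)        # start from a simple majority
    for rule in rules:
        x, middle, y = rule.split(">")        # "X>>Y" -> ("X", "", "Y")
        n = int(middle) if middle else 0      # "X>>Y" behaves like n = 0
        # applicable: X's vote surplus over the current winner Y reaches n
        if result == y and votes.get(x, 0) - votes.get(y, 0) >= n:
            result = x
    return result
```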


majorityvoting

The input can be any number of (general) decision diagrams. The output is a diagram where each domain element is classified by each of the inputs. The final class label is then determined by majority decision. In case this does not lead to a unique result, the input diagram with the least index forces its decision.
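The decision rule for a single domain element can be sketched as follows (illustrative; the list interface is made up):

```python
from collections import Counter

def majority_vote(classifications):
    """classifications: the class assigned to one domain element by each
    input diagram, listed in index order."""
    counts = Counter(classifications)
    best = max(counts.values())
    tied = {c for c, k in counts.items() if k == best}
    # no unique majority: the input diagram with the least index decides
    return next(c for c in classifications if c in tied)
```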


avg

The operator expects exactly two ordered binary trees as input. The result will again be an ordered binary tree of the following form.

For each node from the root to the leaves, if one of the input trees contains a condition X # c1 and the other one X # c2, the resulting tree will contain X # ((c1 + c2) / 2), i.e. the mean of the comparison values is computed. In case one of the inputs contains X # c1 and the other one Y # c2, the result will contain X # c1 at this position (since X is lexically smaller than Y), and the second diagram is incorporated recursively into both subtrees.

In case of contradicting leaf nodes or incompatible comparison operators (e.g. X < c1 and X > c2), the result is "unknown".
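The treatment of a single pair of conditions can be sketched as follows (illustrative; the tuple encoding is made up, and the recursive incorporation of subtrees is not shown):

```python
def avg_condition(cond1, cond2):
    """cond1, cond2: (variable, operator, value) triples from the two input trees.
    Returns the merged condition at this position, or None for "unknown"."""
    (x1, op1, c1), (x2, op2, c2) = cond1, cond2
    if x1 == x2:
        if op1 == op2:
            # same variable and operator: average the comparison values
            return (x1, op1, (c1 + c2) / 2)
        return None                      # incompatible operators -> "unknown"
    # different variables: keep the lexically smaller one at this level
    return cond1 if x1 < x2 else cond2
```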

Control Script

The merging process may be considerably simplified by the use of our control script. It will automatically call graphconverter and MELD in the right order.

This allows you to specify arbitrary input formats (DOT, HEX, answer-set, RapidMiner XML) in your merging tasks, using the source attribute of belief bases.

A call of the control script will automatically do the conversion of the input and output diagrams, as well as the call of MELD. The script will print the name of the file containing the final result.

Download of the Control Script: ddmcontrolscript.tar.gz

For an example, see Example 3


Examples

The following examples may be copied into a text file and executed by the MELD system by entering:

dlvhex --merging --filter=root,innernode,leafnode,elseedge,conditionaledge

Example 1

This merging plan translates an arbitrary decision diagram into an ordered binary tree.

[common signature]
predicate: root/1;
predicate: innernode/1;
predicate: leafnode/2;
predicate: conditionaledge/5;
predicate: elseedge/2;

[belief base]
name: kb1;
mapping: "
    leafnode(leaf1, class1).
    leafnode(leaf2, class2).
    conditionaledge(c, a, c, \'<\', x).
    elseedge(c, b).
    conditionaledge(b, leaf2, b, \'<\', x).
    elseedge(b, leaf1).
    conditionaledge(a, leaf1, a, \'<\', x).
    elseedge(a, leaf2).
";

[merging plan]
    operator: orderbinarydecisiontree;
        operator: tobinarydecisiontree;
            operator: unfold;

Example 2

Here we see a merging plan for integrating two decision diagrams using the userpreferences operator.

[common signature]
predicate: root/1;
predicate: innernode/1;
predicate: leafnode/2;
predicate: conditionaledge/5;
predicate: elseedge/2;

[belief base]
name: kb1;
mapping: "
    leafnode(leaf1A, class1).
    leafnode(leaf2A, class2).
    conditionaledge(rootA, leaf1A, a, \'<\', x).
    elseedge(rootA, leaf2A).
";

[belief base]
name: kb2;
mapping: "
    leafnode(leaf1B, class1).
    leafnode(leaf2B, class2).
    conditionaledge(rootB, leaf1B, b, \'<\', x).
    elseedge(rootB, leaf2B).
";

[merging plan]
    operator: userpreferences;
    preferencerule: "class1>>class2";

Example 3

The following merging plan is a generalization of Example 1. Instead of hard-coding the diagram in the merging plan, it accesses the file (which can contain an arbitrary diagram in DOT format). Call this merging plan with our control script to perform all required conversions automatically.

[common signature]
predicate: root/1;
predicate: innernode/1;
predicate: leafnode/2;
predicate: conditionaledge/5;
predicate: elseedge/2;

[belief base]
name: kb1;
source: "";

[merging plan]
    operator: orderbinarydecisiontree;
        operator: tobinarydecisiontree;
            operator: unfold;

Example: DNA Classification

DNA is the carrier of genetic information in all known living beings. Protein databases, such as SWISSPROT, store known DNA sequences and the proteins encoded by them. Nowadays, DNA strings are mostly created by automatic sequencing procedures.

However, since only a minor part of the overall DNA of an organism actually codes for proteins, it becomes necessary to filter the result of a sequencing procedure. Each sequence needs to be classified as coding or noncoding. This can be done by the use of statistical features, i.e., numeric values computed over sequences, which are known to vary significantly between the two classes. Such features can then be used to train decision diagrams using machine-learning tools.

An advanced method is to train multiple decision diagrams and subsequently merge them into a single one. Some researchers (e.g., Steven Salzberg) have observed that this has advantages over training monolithic classifiers, such as increased accuracy, parallel computation, and a smaller necessary training set.

We have repeated this experiment to show the capabilities of our system. For this purpose, we have implemented Steven Salzberg's merging procedure as a merging operator for DDM. The archive linked below contains all the files needed for the DNA classification example.

To run the experiment, perform the following steps:

  1. The three input models (model1.xml, model2.xml, model3.xml) are given in RapidMiner's file format. They were trained over sets of sequences drawn randomly from the file "dna_trainingdata.xls" (resp. "dna_trainingdata.ods"). For each of the input trees, we have drawn 10 sequences.

  2. Using graphconverter, we have converted each of the input trees in RapidMiner's format into sets of facts:
    graphconverter rmxml hex < model1.xml > model1.hex
    graphconverter rmxml hex < model2.xml > model2.hex
    graphconverter rmxml hex < model3.xml > model3.hex

    Note: The files modeli.png (i = 1, 2, 3) show renderings of the trees. This is only for the reader's convenience and not required to create the merged tree.

  3. The trees are then merged by executing the merging plan, which will apply the merging algorithm developed by Steven Salzberg.

    This produces a merged classifier, encoded as belief set, which is stored in file

  4. The output tree is then converted into RapidMiner's format by executing:

    graphconverter as rmxml < > model_merged.xml

  5. Finally, this model can be loaded into RapidMiner and used for classification.

    We have used a test set consisting of 2000 sequences (1000 coding, 1000 noncoding), which is contained in file "dna_testdata.xls" (resp. "dna_testdata.ods").
    The DNA datasets were extracted by Fickett and Tung from the Human Genome Project in 1992.

Alternatively, you may execute the provided script, which will automatically merge all input decision diagrams named modelXYZ.xml in the current directory (where XYZ are arbitrary strings) into an output decision diagram named merged_model.xml.

Required files: dnaclassification.tar.gz

Note: The archive contains the input decision diagrams model1.xml, model2.xml, and model3.xml. You may replace them by your own diagrams, e.g., trained by RapidMiner, and experiment with the resulting diagram merged_model.xml.


References

For a more detailed and formal description of the operators, see Merging of Biomedical Decision Diagrams. For an introduction to the belief merging framework, see Development of a Belief Merging Framework for dlvhex.

