This page contains information about test instances and benchmark logfiles.
We have used three benchmarks: a synthetic one and two benchmarks motivated by real-world applications. The runtimes for the configurations are summarized in benchmarks.ods.
In our first (synthetic) example program we consider a set S of elements which is partitioned into two subsets S1 and S2, such that S1 contains at most two elements. This is realized with HEX-programs using external atoms of kind &setminus[p,q](X), which compute the set difference of the extensions of p and q. We use the program template partitioning.hex, which specifies two cyclic rules that realize the partitioning and gives the solver a hint on monotonicity in the first input predicate.
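The contents of partitioning.hex are not reproduced on this page, so the following is only a hypothetical sketch of how such an instance could look. The rule bodies, the constraint, and the file name partitioning_sketch.hex are assumptions; only the external atom &setminus[p,q](X) and the description above (two cyclic rules, S1 with at most two elements) are taken from the text.

```shell
# Write a hypothetical partitioning program in the style described above,
# then append domain facts for an instance of size 4.
cat > partitioning_sketch.hex <<'EOF'
% Two cyclic rules realizing the partitioning (sketch, not the original file):
% X is in s1 if it is in the domain but not in s2, and vice versa.
s1(X) :- domain(X), &setminus[domain,s2](X).
s2(X) :- domain(X), &setminus[domain,s1](X).
% s1 contains at most two elements.
:- s1(X), s1(Y), s1(Z), X != Y, X != Z, Y != Z.
EOF

# Append the facts domain(1), ..., domain(4).
i=4
n=1
while [ "$n" -le "$i" ]; do
  echo "domain($n)." >> partitioning_sketch.hex
  n=$((n+1))
done
```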
For benchmarking we use a benchmark script which automatically iterates through instance sizes and instantiates the program template accordingly.
The script can be downloaded here (and is also contained in the dlvhex repository):
benchmark.sh.
It is called like this:

benchmark.sh [Program] [Min Domain Size] [Max Domain Size] [Timeout/s] [Configuration Strings (optional)]

where
- Program is the name of a HEX-program,
- Min Domain Size and Max Domain Size specify the range of instance sizes,
- Timeout specifies the maximum runtime per instance (in seconds), and
- Configuration Strings optionally lists the dlvhex configurations to compare, separated by semicolons (an empty string stands for the default configuration).
The benchmark script evaluates Program for each integer i between Min Domain Size and Max Domain Size, and adds all facts domain(1), ..., domain(i) to the program prior to evaluation. dlvhex is then called once per configuration specified in the last argument.
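The iteration described above can be sketched as follows. This is not the actual benchmark.sh; the dlvhex calls are only printed rather than executed, and the instance range 1..3 and the binary name dlvhex are illustrative assumptions.

```shell
# Sketch of the benchmark loop: for each instance size i, build a fact
# file domain(1), ..., domain(i) and print one dlvhex call per
# configuration (default and --welljustified, as in the example below).
PROG=partitioning.hex
MIN=1
MAX=3
TIMEOUT=300

{
  i=$MIN
  while [ "$i" -le "$MAX" ]; do
    INST="instance_$i.hex"
    : > "$INST"
    n=1
    while [ "$n" -le "$i" ]; do
      echo "domain($n)." >> "$INST"
      n=$((n+1))
    done
    # One run per configuration; the empty string is the default configuration.
    for cfg in "" "--welljustified"; do
      echo "timeout $TIMEOUT dlvhex $cfg $PROG $INST"
    done
    i=$((i+1))
  done
} > benchmark_calls.txt
```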
An example call is:
./benchmark.sh partitioning.hex 1 20 300 ";--welljustified"
This evaluates partitioning.hex for each instance size from 1 to 20, once with default arguments and once with --welljustified, i.e., switching from FLP to well-justified FLP semantics.
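The last argument thus bundles two configurations into one semicolon-separated string, where the empty entry before the semicolon denotes the default configuration. The exact parsing inside benchmark.sh is not shown on this page; the following is merely an illustration of the split.

```shell
# Split the configuration string of the example call on ';'.
# The empty first field corresponds to running dlvhex with default arguments.
CONFIGS=";--welljustified"
printf '%s\n' "$CONFIGS" | awk -F';' '{
  for (i = 1; i <= NF; i++)
    print "configuration " i ": [" $i "]"
}' > configs.txt
```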
For the experiments we used dlvhex 2 from github (hexhex/core) and the MCS-IE plugin (hexhex/mcsieplugin).
To generate the instances, we used the DMCS benchmark instance generator, which is also available as a precompiled binary.
All benchmark instances are contained in the following archive: mcs_benchmarks.tar.bz2. (Instances are sorted by whether they are consistent or inconsistent.)
For the experiments we used dlvhex 2 from sourceforge (hexhex/core) and the Argumentation plugin (hexhex/benchmarks/tree/master/argu). This plugin internally uses many encodings from ASPARTIX, in particular prefex.dl.
To generate the instances, we used the argumentation instance generator from CEGARTIX, in particular the Java tool for generating Argumentation Frameworks which is contained in the following archive: graph.tar.gz.
All benchmark instances are contained in the following archive: argu_instances.tar.bz2.