Distributed Computation of Diagnoses for Inconsistent Multi-Context Systems
This site is about the master thesis "Distributed Computation of Diagnoses for Inconsistent Multi-Context Systems" by Fabian Salcher.
Here you can find the thesis itself, the source code, and the settings and results of the experimental evaluation.
Thesis and Abstract
Multi-Context Systems (MCS) are systems of distributed knowledge bases that interact via so-called
bridge rules. The MCSs we are interested in are nonmonotonic, so bridge rules
can cause inconsistencies in the MCS even though the individual knowledge bases are consistent on their own.
We develop an algorithm that identifies those inconsistencies and proposes bridge rule
modifications to the user which make the system consistent. The algorithm is effective
in that it requests only as much information from the distributed knowledge bases as
necessary; a user interested in only part of the system therefore does not need to
know the whole system. We show that the algorithm is sound and complete and present
data demonstrating the performance of a reference implementation. To increase the performance
of the algorithm, we also propose further optimizations such as edge and subset pruning, and show
the effectiveness of these modifications on the reference implementation.
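The notion of "bridge rule modifications that make the system consistent" can be illustrated with a small brute-force sketch. This is not the distributed algorithm from the thesis; it abstracts the contexts behind a caller-supplied consistency oracle and simply enumerates candidate repairs, keeping the pointwise subset-minimal pairs (D1 = rules to remove, D2 = rules to make unconditional). All names here are illustrative, not taken from the implementation.

```python
from itertools import combinations

def subsets(s):
    """All subsets of s as frozensets, smallest first."""
    s = list(s)
    return (frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r))

def diagnoses(bridge_rules, consistent):
    """Brute-force the subset-minimal diagnoses of an MCS.

    A diagnosis is a pair (D1, D2) of sets of bridge rules such that
    removing the rules in D1 and making the rules in D2 unconditional
    restores consistency.  `consistent(d1, d2)` is a caller-supplied
    oracle standing in for evaluating the actual contexts.  Because
    candidates are enumerated smallest-first, a pair is minimal exactly
    when no already-found pair is pointwise contained in it.
    """
    found = []
    for d1 in subsets(bridge_rules):
        for d2 in subsets(set(bridge_rules) - d1):
            if consistent(d1, d2):
                if not any(p1 <= d1 and p2 <= d2 for p1, p2 in found):
                    found.append((d1, d2))
    return found
```

For example, with three bridge rules and an oracle saying the system becomes consistent when r1 is removed or r2 is made unconditional, the minimal diagnoses are ({}, {r2}) and ({r1}, {}).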
The whole thesis can be found here:
Distributed Computation of Diagnoses for Inconsistent Multi-Context Systems
Implementation
The source code of the implementation is based on the dmcs solver and can be found
on SourceForge.
Experimental Evaluation
Here you find the startup scripts, problem instances, and log files of the experimental evaluation. Each package is organized as follows: the root directory contains one log file per
observed property (maximum memory, time consumption, etc.), and each log file contains the results of all tested instances for a quick overview. Below that there is a subdirectory for each
problem instance configuration, which in turn contains a subdirectory for each randomly generated problem instance. Those directories contain the startup scripts, the knowledge bases
including bridge rules, and the other files necessary for running the dmcs* system. In the log subdirectory you will find the log output of each test run.
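To work with such a package programmatically, the layout described above can be traversed as in the following sketch. The directory and file names assumed here (a `log` subdirectory per instance) follow the description on this page; the actual configuration and instance names are documented in the thesis.

```python
import os

def collect_instance_logs(package_root):
    """Walk an evaluation package laid out as
    <package>/<configuration>/<instance>/log/ and return a mapping
    from (configuration, instance) to the sorted list of log files.
    Plain files in the root (the per-property overview logs) are skipped.
    """
    logs = {}
    for config in sorted(os.listdir(package_root)):
        config_dir = os.path.join(package_root, config)
        if not os.path.isdir(config_dir):
            continue  # top-level overview log file, not a configuration
        for instance in sorted(os.listdir(config_dir)):
            log_dir = os.path.join(config_dir, instance, "log")
            if os.path.isdir(log_dir):
                logs[(config, instance)] = sorted(os.listdir(log_dir))
    return logs
```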
For more information about the log output, the problem instances, and their naming, see the chapter "Experimental Evaluation" in the thesis.