Mobile Benchmark Framework

This webpage represents the online documentation for the submission "A Cross-Platform Benchmark Framework for Mobile Semantic Web Reasoning Engines" to the 13th International Semantic Web Conference (ISWC 2014). This documentation should be considered complementary to the contents of that submission. At a later point, this documentation, together with the related software artefacts, will likely be moved to an online, freely accessible code repository such as GitHub.

The code is released under the Apache License 2.0. All code can be found here (see the Deployment & Usage section below for info on how to deploy and run the code). The ruleset and dataset used in the IMPACT-AF example benchmark can be found here.

Contact: William Van Woensel

Deployment & Usage

For most developers using the framework, it is likely easiest to directly import the Eclipse projects (via File > Import > General > Existing projects into workspace).

(After import, some errors may be listed, such as no matching Java version being installed or certain referenced target runtimes not existing; this is easily remedied by selecting the flagged problem (see the "Markers" view), right-clicking it, and selecting Quick Fix.)

Note that the assets/www folder of the Android project contains all the JavaScript & HTML code from the BenchmarkEngineJS project. Changes to the JavaScript code in this folder are directly reflected when running the Android project; in other words, you do not need to re-deploy anything using PhoneGap.

This brings us to the reason for also including the BenchmarkEngineJS project in the distribution: it allows the JavaScript part of the Mobile Benchmark Framework, including rule and data conversion as well as the benchmarking of JavaScript reasoning engines, to be tested in a desktop browser (which is much easier than testing on Android!). This can be done by running the BenchmarkEngineJS project in Eclipse on a Java servlet container (e.g., Apache Tomcat).

Important: note that the "deviceready" event fired by PhoneGap will never occur in this case. This can be remedied by commenting out the listener for this event in index.html (no other changes need to be made):

// document.addEventListener("deviceready", function() {
// console.log("cordova.deviceready");

var script1 = document.createElement("script");
script1.src = "js/runExperiment.js";

var body = document.getElementsByTagName("body").item(0);
body.appendChild(script1);
// });
    
Code 1: Excerpt from index.html (1).

Both the BenchmarkEngineJS and BenchmarkEngineAndroid projects can thus be used to perform benchmarks (as well as rule and data conversion), where the former is specifically meant for debugging purposes.

Below, we go over the different steps to run benchmarks (and perform conversions). For the BenchmarkEngineJS project, the code referenced below will be located in the WebContent/ folder. For the BenchmarkEngineNative project, the code can be found under assets/www.

1) Run the Conversion Web service using a Java servlet container such as Apache Tomcat; it is likely easiest to do this directly in Eclipse. Subsequently, edit the Conversion Web service URL in js/config.js to match the server's current IP address and port:

config = {
    webService : {
        url : "http://134.190.145.221:8081/SPIN%20WebService/convert/",
        timeout : 10000 // connection timeout
    }
};
    
Code 2: config.js

2) The index.html file specifies whether to run a benchmark or a separate conversion process (comment out the line you do not need):

var script = "js/benchmark.js"; // (run benchmark)
var script = "js/convert.js"; // (perform conversion separately)
    
Code 3: Excerpt from index.html (2).

Conversion: using js/convert.js, the conversion process can be run separately to avoid contacting the Conversion Web service during benchmarks (note that this conversion time is never included in performance measurements). See the comments in js/convert.js for more details.
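
To give an idea of the interaction with the Conversion Web service, below is a minimal sketch of such a separate conversion request using jQuery; the payload fields (rules, targetFormat) are hypothetical placeholders, as the actual request format is determined by js/convert.js and the Web service's message beans:

// Hedged sketch: the payload fields below are illustrative placeholders;
// see js/convert.js and the Web service message beans for the real format.
$.ajax({
    url : config.webService.url,          // Conversion Web service (config.js)
    type : 'POST',
    contentType : 'application/json',
    timeout : config.webService.timeout,  // connection timeout (config.js)
    data : JSON.stringify({
        rules : spinRules,        // (hypothetical) SPIN rules to convert
        targetFormat : 'Jena'     // (hypothetical) target engine format
    }),
    success : function(converted) {
        console.log("converted: " + JSON.stringify(converted, null, 4));
    },
    error : function(xhr, textStatus) {
        console.log("conversion failed: " + textStatus);
    }
});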

Benchmarking: js/benchmark.js contains the necessary code to run the benchmark process, based on the configuration given in bConfig.js. To run benchmarks, developers should normally not concern themselves with the benchmark.js file. Instead, bConfig.js determines all configurable aspects of the benchmarks. Below, we show the bConfig.js file for (part of) the example benchmark from the paper (see code comments for more info):

bConfig = {
    // processFlow determines the timing of reasoning 
    processFlow : 'frequent_reasoning', 
        // options: frequent_reasoning, incremental_reasoning

    // ID of the mobile reasoning engine to be benchmarked 
    // (note that the engine also determines a second process flow, 
    // indicating the operation ordering)
    engine : 'RDFStore_JS',

    // number of times the benchmark should be run, to minimize
    // impact of concurrent OS processes
    nrRuns : 10,

    // ruleset configuration
    ruleSet : {
        // path of the ruleset
        path : "res/rules/af/benchmark.spin-rules",
        format : 'SPIN' // options: SPIN, native
    },

    // in case of 'incremental reasoning': 
    //      include baseline & single dataset
    // else, put info directly under 'dataSet'
    dataSet : {
//      baseline : {
            path : "res/data/af/75/benchmark.nt",

            format : 'RDF', // options: RDF, native
            syntax : 'N-TRIPLE' // options: RDF/XML, N-TRIPLE, TURTLE, TTL, N3,
                                // RDF/XML-ABBREV
//      },
//      single : {
//          path : "res/data/af/25/benchmark2_2.nt",
//
//          format : 'RDF',
//          syntax : 'N-TRIPLE'
//      }
    }
};
    
Code 4: bConfig.js

Currently, the bConfig.js setup performs a benchmark for a particular mobile reasoning engine using the mobile IMPACT-AF ruleset and dataset. To investigate scalability, increasing sizes of this dataset are available for benchmarking. The ruleset and dataset can be found under /res/rules/af/ and /res/data/af/, respectively. To allow developers to check inferencing completeness, each mobile reasoning engine also outputs the inferred triples. The correct inferred triples for each dataset size can be found in res/af-results.txt.
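
As a sketch of such a completeness check (assuming, hypothetically, that both the engine output and res/af-results.txt contain one triple per line; the actual formats may differ):

// Hedged sketch: compare inferred triples against the expected results
// (assumes one triple per line in af-results.txt, which may not hold)
function checkCompleteness(inferred, expectedText) {
    var expected = expectedText.split("\n").filter(function(line) {
        return line.trim().length > 0;
    });

    // collect expected triples missing from the inferred output
    var missing = expected.filter(function(triple) {
        return inferred.indexOf(triple) < 0;
    });

    if (missing.length == 0)
        console.log("inferencing complete");
    else
        console.log("missing triples: " + JSON.stringify(missing, null, 4));
}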

3) Run either the BenchmarkEngineJS project on a server, to test rule and data conversion or debug JavaScript reasoning engines; or the BenchmarkEngineAndroid project on an Android device, to have access to the complete Mobile Benchmark Framework. Instructions on how to deploy Android projects in Eclipse are beyond the scope of this online documentation; we refer to the ADT documentation for more information.

Extensions

The Benchmark Framework can be extended in three ways; below, we elaborate on these three extension points.

Rule and data converters

Conversion is performed by the Conversion Web service. Each converter class implements a uniform interface, and converts rules or data into a particular native reasoning engine format.

Upon receiving SPIN rules to be converted, the Web service uses the TopBraid SPIN API to parse the rules into an Abstract Syntax Tree (AST), which can be visited using the API's Visitor classes. Currently, each developed converter has three related classes: one converter class and two visitor classes (technically, only the converter class is required; the two additional classes provide better encapsulation). Below, you can find the interface that the converter class needs to implement (showing only the abstract methods; also note that a unique ID parameter needs to be passed to the constructor):

public abstract class SPIN2 extends RuleConverter {

    public SPIN2(String id) {
        super(id);
    }

    ...    

    public abstract String convert(Construct query) throws ConvertException;

    ...

    // this method allows resetting internal state after a conversion request
    // (not mandatory for subclasses)
    public void reset() {}
}
    
Code 5: SPIN2 Rule Converter interface.

(to avoid confusion, note that "SPIN2" stands for "converting SPIN to .." and does not indicate the supported SPIN version)

For instance, the convert method implemented by the SPIN2Jena class (converting SPIN rules to Jena format) can be found below:

public String convert(Construct query) throws ConvertException {
    // (instantiate visitor class to visit the parsed query)
    SPIN2JenaVisitor visitor = new SPIN2JenaVisitor(this);

    // 1) Convert rule body (the CONSTRUCT query's WHERE clause) using visitor
    query.getWhere().visit(visitor);
    String leftPart = visitor.getLeftPart();

    // 2) Convert rule head (the CONSTRUCT query's triple templates)
    String rightPart = "";

    // (convert all triple patterns into Jena string format)
    List<TripleTemplate> templates = query.getTemplates();
    for (TripleTemplate template : templates)
        rightPart += "\n(" + ConvertUtils.toString(template) + ")";
        
    // 3) Generate rule
    String rule = "[R" + (ctr++) + ": " + leftPart + "\n" + "->" + rightPart + "\n" + "]";

    // (Jena rule format doesn't allow angle brackets around datatypes..)
    rule = rule.replaceAll("\\^\\^<([^>]*)>", "^^$1");
    
    return rule;
}
    
Code 6: Example implementation of the abstract convert method.

New converter classes are listed (name & package) in a configuration file (spin2s.txt for rules and rdf2s.txt for data, found under the WebContent/ folder). These files are read by the Web service at startup to dynamically load the converter class definitions. We refer to the Conversion Web service source code (see the *.convert.to.* packages) for the current converter implementations.
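
For illustration, such a configuration file simply lists one fully qualified converter class per line; the package prefix (and the SPIN2Nools name) below are hypothetical placeholders for the actual *.convert.to.* packages:

org.example.convert.to.jena.SPIN2Jena
org.example.convert.to.nools.SPIN2Nools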

Reasoning setups

Each reasoning setup is represented by a JavaScript object. We show the reasoning setup for FrequentReasoning-LoadDataExecuteRules below:

// NOTE global scope contains engine, bConfig objects
setup = {

    runBenchmark : function(engine, bConfig, timer, callback) {
        // data & rules
        var dataSet = bConfig.dataSet, ruleSet = bConfig.ruleSet;

        // load triples into engine
        console.log("..loading data");
        engine.loadTriples(dataSet, timer, function() {

            // execute rules
            console.log("..executing rules");
            engine.execRules(ruleSet, timer, function(inferred) {
                console.log("inferred: " + JSON.stringify(inferred, null, 4));

                callback();
            });
        });
    },

    // checks whether the config is correctly structured (optional, and can be
    // as extensive as deemed necessary; config mistakes are easily made
    // during debugging)
    checkConfig : function(bConfig) {
        if (!bConfig.dataSet.path) {
            console.log("Error: expecting 'path' in dataSet config");

            return false;

        } else
            return true;
    },

    // returns timer that times all operations relevant to this reasoning setup
    getTimer : function(engine, bConfig) {
        function BenchmarkTimer(config) {
            this.resultTimes = new ResultTimes();

            this.loadTriples = new Timer(config, 'loadTriples', this.resultTimes);

            this.createRules = new Timer(config, 'createRules', this.resultTimes);
            this.executeRules = new Timer(config, 'executeRules', this.resultTimes);
        }

        return new BenchmarkTimer([ engine.config.processFlow, bConfig.processFlow ]);
    }
};
    
Code 7: Reasoning setup JavaScript object for FrequentReasoning-LoadDataExecuteRules.

The runBenchmark method invokes operations from the uniform reasoning engine interface (loadTriples, execRules) to realize its particular process flow. In this case, the entire dataset is loaded and reasoning is performed once over it (Frequent Reasoning), whereby the data is first loaded and the rules are then executed (LoadDataExecuteRules). The checkConfig method checks whether the configuration (kept in bConfig.js, see above) is correct; this method can be as simple or as extensive as needed, based on the errors that are likely to occur. Finally, the getTimer method returns a timer object that times all operations relevant to the reasoning setup (other setups, involving for instance Incremental Reasoning, need to time additional operations, such as a second load data & execute rules operation).
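
For comparison, below is a rough sketch of what the runBenchmark method of an Incremental Reasoning setup could look like, based on the baseline/single dataset structure from bConfig.js (a sketch only; the actual setup objects can be found under js/setups/):

// Hedged sketch of an incremental LoadDataExecuteRules flow: load & reason
// over the baseline dataset, then incrementally add the single dataset
runBenchmark : function(engine, bConfig, timer, callback) {
    var baseline = bConfig.dataSet.baseline, single = bConfig.dataSet.single;
    var ruleSet = bConfig.ruleSet;

    // load & reason over the baseline dataset
    engine.loadTriples(baseline, timer, function() {
        engine.execRules(ruleSet, timer, function() {

            // incrementally load the single dataset & re-execute the rules
            engine.loadTriples(single, timer, function() {
                engine.execRules(ruleSet, timer, function(inferred) {
                    callback();
                });
            });
        });
    });
}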

Each reasoning setup object is added to the js/setups/ folder. In addition, the object is listed in the mapping.json file in the same folder, which maps combinations of process flows (e.g., Frequent Reasoning, LoadDataExecuteRules) to the filename of the corresponding setup object:

{
    "frequent_reasoning": {

        "load_data_exec_rules": "setup1.js",
        "load_rules_data_exec" : "setup3.js"
    },

    "incremental_reasoning" : {
        "load_data_exec_rules": "setup2.js",
        "load_rules_data_exec" : "setup4.js"
    }
}
    
Code 8: mapping.json.
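
To illustrate, the framework can resolve the setup file for a given benchmark by combining the two configured process flows (a minimal sketch; the actual lookup is performed by the framework code):

// Hedged sketch: combine the benchmark's process flow (bConfig.js) with the
// process flow dictated by the engine plugin to find the setup file
var setupFile = mapping[bConfig.processFlow][engine.config.processFlow];
// e.g., mapping['frequent_reasoning']['load_data_exec_rules'] => "setup1.js"
console.log("loading setup: js/setups/" + setupFile);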

Reasoning engine plugins

Similarly, a mobile reasoning engine can be plugged into the framework by writing a JavaScript "plugin" object. Below, we show the JavaScript plugin object for the RDFQuery engine (showing only the important parts):

var libPath = "js/engines/rdfquery/libs/";

engine = {
    // unique ID for the engine
    id : 'RDFQuery',

    config : {
        // libraries to be loaded before the plugin can be used
        libs : [ libPath + "jquery.rdfquery.rules-1.0.js" ],

        convertRules : true, // whether rules should be converted
        ruleFormat : 'RDFQuery', // target rule format
        // (whether rules need to be passed in an array; for convenience)
        rulesToArray : true, 

        // whether data should be converted
        // (e.g., in this case, RDFQuery does not accept datatypes in RDF)
        convertData : true,
        dataFormat : 'RDFQuery', // target data format
        // (whether triples need to be passed in an array; for convenience)
        dataToArray : true,

        // process flow dictated by reasoning engine 
        processFlow : 'load_data_exec_rules'
            // options: 'load_data_exec_rules', 'load_rules_data_exec'
    },

    ...

    // load triples from dataSet into the engine
    loadTriples : function(dataSet, timer, callback) {
        var triples = dataSet.data;

        // if engine had not yet been created, create new one
        if (!this.store)
            this.store = $.rdf();

        // start timing the loadTriples operation
        timer.loadTriples.begin();

        for (var i = 0; i < triples.length; i++) {
            var triple = triples[i];

            // load each triple individually into the store
            this.store = this.store.add(triple);
        }

        // stop the timer
        timer.loadTriples.done();

        callback();
    },

    // execute rules from ruleSet
    execRules : function(ruleSet, timer, callback) {
        var rules = ruleSet.rules;

        for (var i = 0; i < rules.length; i++) {
            var rule = rules[i];

            // potentially, convert strings to JavaScript constructs 
            // (e.g., filters in RDFQuery are represented as JavaScript functions)
            rule = evalNReturn(rule);

            // start timing the createRules operation (incremental)
            timer.createRules.begin();
            // construct internal rule object for each rule
            rule = $.rdf.rule(rule.left, rule.right);
            // stop the timer
            timer.createRules.done();

            // start timing the executeRules operation (incremental)
            timer.executeRules.begin();
            // execute each rule on the engine
            this.store.reason(rule);
            // stop the timer
            timer.executeRules.done();
        }

        // get the inferred triples from the engine
        var inferred = this.store.databank.inferred;
        ...
        // return the inferred triples (these will be outputted)
        callback(inferred);
        ...
    }
};
    
Code 9: Plugin JavaScript object for the RDFQuery reasoning engine.

Each plugin object is put into a separate file and folder under js/engines/, both named after the unique engine ID (see the id property) in lower case. A separate subfolder called libs/ should contain any libraries (e.g., the reasoning engine code itself) that need to be loaded. These library files are then indicated by the plugin object in the config part (see the libs property). In addition, this config part specifies whether and how rules & data should be converted, as well as the process flow dictated by the engine (see the processFlow property).
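
Concretely, the RDFQuery plugin shown above would be laid out along the following lines:

js/engines/rdfquery/
    rdfquery.js                       (plugin object shown in Code 9)
    libs/
        jquery.rdfquery.rules-1.0.js  (the reasoning engine library itself)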

The object further implements the uniform engine interface (methods loadTriples and execRules), and translates method invocations to the underlying reasoning engine. These two methods also invoke the passed timer object at the appropriate places to correctly time the different operations. To allow developers to determine inferencing completeness, a plugin is also expected to return an array of all inferred triples (see the callback(inferred) invocation).

Native mobile reasoning engines require a plugin class implemented on the native platform (e.g., Android). Analogous to the JavaScript engines, these plugin classes implement the uniform engine interface and specify the aforementioned information (i.e., engine ID, config info). Below, we show the Android plugin class for the AndroJena engine (showing only the important parts):

public class AndroJena extends RuleEngine_Process1 {

    ...

    public AndroJena() {
        // specify same info as for JavaScript plugins
        super("AndroJena", new EngineConfig_Process1(true, "Jena", false,
                false, null, false));
    }

    ...

    // load triples from dataSet into the engine
    public ResultTimes loadTriples(DataSetConfig dataSet) {
        Model curModel;
        ...
        ExperimentTimer timer = new ExperimentTimer("loadTriples");
        // start timing the loadTriples operation
        timer.begin();
        
        String triples = dataSet.getTriples();
        // load triples into the engine
        RDFReader reader = curModel.getReader(dataSet.getRdfSyntax().toString());
        reader.read(curModel, new StringReader(triples), "");

        // stop the timer
        timer.done();

        // return the timing results
        return new ResultTimes(timer.result());
    }

    // execute rules from ruleSet
    public ExecuteResults executeRules(RuleSetConfig ruleSet) {
        ResultTimes resultTimes = new ResultTimes();
        ...
        ExperimentTimer timer = new ExperimentTimer("createRules",
                resultTimes);

        // start timing the createRules operation
        timer.begin();
        // create internal rule objects for each rule
        List<Rule> rules = Rule.parseRules(ruleSet.getRules());
        // stop the timer
        timer.done();
        
        timer = new ExperimentTimer("executeRules", resultTimes);

        // start timing the executeRules operation
        timer.begin();
        // execute rules on the engine
        GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
        infModel = ModelFactory.createInfModel(reasoner, model);

        Model dedModel = infModel.getDeductionsModel();
        // stop the timer
        timer.done();
        ...
        // get the inferred triples from the engine
        List<String> inferred = new Vector<String>();

        StmtIterator stmtIt = dedModel.listStatements();
        while (stmtIt.hasNext())
            inferred.add(stmtIt.next().toString());

        // return the inferred triples (these will be outputted), 
        // together with the timing results
        return new ExecuteResults(resultTimes, inferred);
    }
}
    
Code 10: Plugin Java (Android) object for the AndroJena reasoning engine.

This class is further listed in a file (assets/engines.txt) that is read by the native part of the Mobile Benchmark Engine, which is responsible for managing the native plugins (see below). In addition, developers need to add a dummy JavaScript plugin object for the engine, indicating the engine ID and the fact that it concerns a native engine:

engine = {
    id : 'AndroJena',

    config : {
        libs : [],

        native : true
    }
};
    
Code 11: Dummy JavaScript plugin object for a native reasoning engine.

Behind the scenes, this dummy plugin is replaced by a proxy JavaScript component that implements communication with the native plugin, using the PhoneGap communication bridge (see js/libs/Proxy.js).
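
Conceptually, the proxy forwards each uniform interface method over the bridge along the following lines (a sketch only; the service and action names are assumptions, and the actual implementation is found in js/libs/Proxy.js):

// Hedged sketch: forward loadTriples to the native side via cordova.exec
// (the "NativeEnginesPlugin" service name & argument layout are assumptions)
loadTriples : function(dataSet, timer, callback) {
    cordova.exec(
        function(result) { callback(); },                     // success
        function(error) { console.log("error: " + error); },  // failure
        "NativeEnginesPlugin",  // native class receiving the call
        "loadTriples",          // uniform interface method to invoke
        [ dataSet ]);           // arguments
}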

To manage these native engine plugins, the Mobile Benchmark Engine comprises a native part. This part consists of an Android class that receives incoming JavaScript function calls (NativeEnginesPlugin), which extends the CordovaPlugin (PhoneGap) class and thus acts as the communication bridge; and an Android Service (NativeEnginesService), which performs the actual managing task. As mentioned, each native engine plugin is listed (name & package) in assets/engines.txt, which is read by the Android Service at startup and allows the native engine plugins to be loaded dynamically.
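
Analogous to the converter configuration files, assets/engines.txt lists one fully qualified plugin class per line; for example (package prefix hypothetical):

org.example.benchmark.engine.androjena.AndroJena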

Code Structure

In this section, we briefly elaborate on the code structure of the three projects.

Conversion Web service

The code is structured as follows:

Figure 1: Conversion Web service code structure.

Regarding Java source code, the *.convert.servlet package contains the Java servlet acting as the RESTful interface to the Conversion Web service. The *.convert.servlet.msg package contains Java Beans (messages) to be sent from and to the JavaScript code. The servlet uses the GSON library to serialize and deserialize such messages to and from JSON.

The *.convert.to.* packages contain the converter classes (and their accompanying visitor classes) to convert parsed SPIN ASTs (and possibly RDF) to native engine formats. Each leaf package (e.g., *.jena, *.nools) contains the code to convert rules or data to that particular format.

The WebContent/ folder contains the two files listing the converter classes, namely spin2s.txt and rdf2s.txt.

BenchmarkEngineJS

The code is structured as follows:

Figure 2: Benchmark Engine (JavaScript part) code structure.

The js/ folder contains all JavaScript code. Reasoning engine plugin objects are put into separate folders inside the engines/ folder. The libs/ folder contains any (external) JavaScript libraries used by the Mobile Benchmark Framework (e.g., jQuery, moment.js). The plugins/ folder (not to be confused with reasoning engine plugins!) comprises JavaScript code for PhoneGap plugins (see the PhoneGap documentation). The setups/ folder comprises the different reasoning setup JavaScript objects. The bConfig.js, benchmark.js, config.js and convert.js files are described above.

The res/ folder includes the rules and data used by the benchmarks, respectively kept in the rules/ and data/ subfolders. Currently, these contain the IMPACT-AF rules and data used in our example benchmark. The af-results.txt file keeps the correctly inferred facts for each dataset. The index.html file represents the starting point of the Mobile Benchmark Framework, where a selection can be made between performing benchmarks or separately converting rules and data.

BenchmarkEngineNative

The code is structured as follows:

Figure 3: Benchmark Engine (Native part) code structure.

The *.benchmark package contains the NativeEnginesPlugin and NativeEnginesService classes for communication with JavaScript code and managing native reasoning engine plugins, respectively. The *.benchmark.config package comprises a number of enumerations and Java Bean classes to represent relevant configurations (e.g., for dataset and ruleset). Package *.benchmark.engine contains uniform interfaces for reasoning engine plugins, while *.benchmark.engine.androjena comprises a plugin implementation for the AndroJena reasoning engine. Finally, the assets/www/ folder contains all JavaScript and HTML code from the JavaScript part of the Mobile Benchmark Engine.