OPODIS17 artifact (under submission)

Synthesis of Distributed Algorithms with Parameterized Threshold Guards

 

This is an artifact that accompanies the submission by Marijana Lazic, Igor Konnov, Josef Widder, and Roderick Bloem. If you have any questions about the artifact or the experiments, feel free to ask Igor Konnov.

1. Virtual machine

You can download the virtual machine (VirtualBox, 64-bit, 1.8 GB) from TODO.

The virtual machine contains Debian GNU/Linux 3.16 with all necessary packages installed. The packages are installed under the user account named ‘user’ with the password ‘user’. If you have to use the root account, the password is ‘root’. Alternatively, you can also use the ‘sudo’ command.

2. Package structure

The artifact files are located under the directory ‘/home/user/opodis17-submission3’, whose structure is as follows:

  • benchmarks/ contains the benchmarks used in the paper, encoded as generalized threshold automata.
  • bymc/ contains the tool (binaries and source code).

3. Platform requirements

We ran the experiments on two systems: a laptop and the Vienna Scientific Cluster (VSC-3). The laptop was equipped with 16 GB of RAM and an Intel® Core™ i5-6300U processor with 4 cores, 2.4 GHz. The cluster VSC-3 consists of 2020 nodes, each equipped with 64 GB of RAM and 2 processors (Intel® Xeon™ E5-2650v2, 2.6 GHz, 8 cores), internally connected with an Intel QDR-80 dual-link high-speed InfiniBand fabric.

The virtual machine can be used to run the experiments on your own computer, e.g., a laptop. If you would like to run the experiments on a cluster, you have to set up your own environment there.

 

4. Installation instructions

This virtual machine already contains all the necessary tools and libraries:

  • ocaml 4.x and ocamlbuild
  • ocaml libraries: batteries, ocamlgraph, sexplib, ocaml_plugin
  • python 2.x
  • Z3 4.5.0

In case you want to install the tool from scratch, please check the file ~/opodis17-submission3/bymc/README.txt.

5. Benchmarks

TODO

 

6. Running the individual experiments

In this section, we give the commands to check the property unforg of the algorithm bcast-byz.pml. To find the properties of other benchmarks, check the table in Section 5, or search the Promela code for lines that look like ltl property_name {...}.
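As a quick sketch of the second option, the following snippet extracts the names of ltl blocks from a Promela file. It is shown here on a throwaway example file under /tmp; substitute any benchmark file from the artifact:

```shell
# Create a throwaway Promela fragment with two ltl declarations
# (the formulas are illustrative, not taken from the benchmarks).
cat > /tmp/example.pml <<'EOF'
ltl unforg { [](prec0 -> <> ac0) }
ltl corr { [](prec1 -> <> ac1) }
EOF
# Extract the property names: the second word of each 'ltl <name> {...}' line.
grep -oE 'ltl [A-Za-z0-9_]+' /tmp/example.pml | awk '{print $2}'
# prints:
# unforg
# corr
```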

The following command runs the parameterized model checking technique presented in the paper:

$ ~/opodis17-submission3/bymc/bymc/verifypa-post ~/opodis17-submission3/benchmarks/promela/bcast-byz.pml unforg

You should see the following line in the output:

The property is verified in 0 refinement steps

(Since this technique uses integer counters, the number of refinement steps is always zero.)
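If you script several runs, a simple grep over the saved output suffices to check the verdict. A minimal sketch, using a stand-in file /tmp/out.txt in place of the redirected stdout of verifypa-post:

```shell
# Stand-in for the redirected tool output of a successful run.
printf 'The property is verified in 0 refinement steps\n' > /tmp/out.txt
# Report success if the verification line is present in the output.
if grep -q 'The property is verified' /tmp/out.txt; then
  echo verified
else
  echo 'check the log for a counterexample'
fi
# prints: verified
```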

If you pass the flag -D BUG1=1, which injects more faults than the algorithm can tolerate, that is:

$ ~/opodis17-submission3/bymc/bymc/verifypa-post ~/opodis17-submission3/benchmarks/promela/bcast-byz.pml unforg -D BUG1=1

you should see the following line in the last six lines of the output:

> SLPS: counterexample for unforg found

The file ‘verdict.txt’ (alternatively, you can open ‘verdict.csv’ in a spreadsheet editor) contains detailed statistics. The most interesting fields are as follows:

  • '06:total-sec' contains the total running time (plotted in Figure 9 of the paper),
  • '43:post-maxreskb' contains the consumed resident memory (in kilobytes, plotted in Figure 9 of the paper),
  • '52:post-nschemas' contains the number of explored lassos (plotted in Figure 9 of the paper),
  • '48:post-nlocs' and '49:post-nrules' contain the numbers of threshold automaton locations and rules, respectively.
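To extract one of these fields without a spreadsheet editor, a small awk sketch works, assuming verdict.csv is a headered CSV (a stand-in file with illustrative values is created below):

```shell
# Stand-in for verdict.csv, with three of the fields listed above
# and made-up values.
cat > /tmp/verdict.csv <<'EOF'
06:total-sec,43:post-maxreskb,52:post-nschemas
12.5,20480,34
EOF
# Find the column named '06:total-sec' in the header and print its value.
awk -F',' 'NR==1 { for (i = 1; i <= NF; i++) if ($i == "06:total-sec") c = i }
           NR==2 { print $c }' /tmp/verdict.csv
# prints: 12.5
```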

NOTE: some benchmarks (ABA, CBC, CF1S, C1CS, BOSCO) require considering several cases (CASE1, CASE2, etc.). The precise flags are given in the table in Section 5. For instance:

$ ~/opodis17-submission3/bymc/bymc/verifypa-post ~/opodis17-submission3/benchmarks/promela/bosco.pml lemma3_0 -D CASE3=1 -D WEAKLY_1S=1

8. Log files

The log files are contained in the directory ~/opodis17-submission3/logs/table2. The structure is as follows:

  • out/1/$i/stdout contains the tool output for experiment number $i.
  • x/<benchmark>-<spec>-exp<i>-<timestamp> contains the files generated by the tool when running experiment number <i> for the benchmark <benchmark> and the specification <spec>. The files of interest are:
    • post.out contains the tool output.
    • fuse.sk contains the (generated) description of a threshold automaton.
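To get a quick overview of many runs, one can grep the stdout files for the verdict lines. A sketch over a throwaway directory that mimics the out/1/$i/stdout layout (the two fake log lines follow the messages quoted in Section 6):

```shell
# Mimic the out/1/$i/stdout layout with two fake experiment logs.
mkdir -p /tmp/logs/out/1/1 /tmp/logs/out/1/2
echo 'The property is verified in 0 refinement steps' > /tmp/logs/out/1/1/stdout
echo 'SLPS: counterexample for unforg found' > /tmp/logs/out/1/2/stdout
# One verdict line per experiment, prefixed with its log file.
grep -r -e 'property is verified' -e 'counterexample' /tmp/logs/out | sort
```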

9. Source files

The source files (in OCaml) are located under the directory '~/opodis17-submission3/bymc/bymc/src'. The technique presented in this paper is implemented in the module 'schemaCheckerLtl.ml'. Its top-level function, find_error, is invoked by the class 'SchemaCheckerPlugin'. The implementation of the reachability analysis, introduced in [Konnov, Veith, Widder’15], can be found in the module 'schemaChecker.ml'.
