SUPA: On-Demand Strong Updates Analysis via Value-Flow Refinement

Yulei Sui     Jingling Xue    
School of Computer Science and Engineering, UNSW Australia

1. Description

SUPA is a new demand-driven Strong UPdate Analysis that computes points-to information on demand via value-flow refinement. We formulate SUPA as a graph-reachability problem on a value-flow graph representation of the program, so that strong updates are performed where needed, with imprecisely pre-computed value-flows being refined away, as long as the total analysis budget is not exhausted. SUPA facilitates efficiency and precision trade-offs by allowing different pointer analyses to be applied in a hybrid multi-stage analysis framework.

2. Paper and Artifact

On-Demand Strong Update Analysis via Value-Flow Refinement. ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE '16)

Virtual Image of SUPA Framework: SUPA.ova (~5GB)

SUPA is built on top of SVF, a static value-flow analysis framework operating on LLVM IR. SUPA will be integrated into SVF based on the latest release of LLVM (version 3.8.0). The core source code files reside under "include/DDA" and "lib/DDA". Its executable "dvf" can be found at "Release+Asserts/bin/dvf" after a successful build.

To help users reproduce the data in the paper, we have also provided a virtual machine image, which contains the releases of SUPA, together with scripts and benchmarks used in our experimental evaluation, as described in the paper.

The OS in the virtual machine image is Ubuntu 12.04. A user account has been created with both its username and password set to "pta". To run SUPA, you are advised to allocate at least 16GB of memory to the virtual machine. The whole-program sparse flow-sensitive analysis, denoted SFS in the paper, requires a large amount of memory; a lower memory budget may force the OS to kill the running process when SFS is used to analyse some large programs, e.g., vim, gdb, emacs. Finally, VirtualBox version 4.1.12 or newer is required to run the image.

3. Quick Guidelines

To run the experiments as described in our paper, open a terminal and do:

cd /home/pta/pta/ # Go to the SUPA project directory, denoted as $SUPAHOME
. ./setup.sh # Set up environment variables (note the space between the two dots)
cd SUPA-exp # Go to the experiment directory

To run three analyses for all 12 benchmarks, execute the following scripts:

./run.sh DFS # Run SUPA-FS analysis only
./run.sh CXT # Run SUPA-FSCS analysis
./run.sh SFS # Run whole-program sparse flow-sensitive analysis

Please note that all the results related to analysis times and memory usage in our paper were obtained on a machine with a 3.7 GHz 8-core Intel Xeon CPU and 64 GB of memory. On this platform, obtaining the results for SUPA may take about 30 minutes with a budget of 1000, and obtaining the results for SFS may take more than five hours (especially for large programs, such as bash, vim, emacs).

Initially, users are advised to analyse a few benchmarks with lower budgets using the default configuration files 'budget' and 'benchmarks'. To analyse all the benchmarks, use the configuration file containing all the benchmarks (remember to re-run everything whenever the configuration files are changed):

cp benchmarks.full benchmarks # configured to analyze all benchmarks
vi budget # configure a customized budget (e.g., 10000). SUPA is completely re-run under each budget setting

After all the analyses have finished, you can collect the data for the tables and figures using the scripts in the same folder:

./figure_9.sh # Reproduce the data in Figure 9
./table_2.sh # Reproduce the data in Table 2
./figure_10.sh # Reproduce the data in Figure 10
./figure_11.sh # Reproduce the data in Figure 11
./table_3.sh # Reproduce the data in Table 3

For comparison purposes, the experimental data used in the paper is also provided under "$SUPAHOME/SUPA-exp/supa-fse/".

To reproduce the results shown in tables and figures with the provided data, issue the following commands:

./figure_9.sh supa-fse
./table_2.sh supa-fse
./figure_10.sh supa-fse
./figure_11.sh supa-fse
./table_3.sh supa-fse

4. Benchmarks

The benchmarks used in our paper are stored under the "SUPA-exp" directory. There are 12 open-source programs:

a2ps, bison, bash, emacs, grep, make, hmmer, less, milc, sendmail, tar, vim

The source code of each program is compiled into bitcode files using clang and then merged together using the LLVM Gold Plugin at link time (LTO) to produce a whole-program .bc file.
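As a rough sketch of this two-step process (file names hypothetical; the benchmarks use the Gold Plugin at link time, but for a small project `llvm-link` achieves a similar merge of per-file bitcode):

```shell
# Compile each source file to LLVM bitcode (clang 3.8).
clang -c -emit-llvm -g foo.c -o foo.bc
clang -c -emit-llvm -g bar.c -o bar.bc

# Merge the per-file bitcode into one whole-program .bc file.
llvm-link foo.bc bar.bc -o prog.bc
```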

We have also provided a micro-benchmark suite, PTABen, to validate the correctness of our pointer analysis algorithms for both single- and multi-threaded C programs. Its source code is hosted on GitHub. The layout of the test suite is as follows:

fi_tests : flow-insensitive tests
fs_tests : flow-sensitive tests
cs_tests : context-sensitive tests
path_tests : path-sensitive tests
complex_tests : complex test cases simplified from real programs
scripts : scripts to run the tests

Some micro-benchmarks can be found under "/home/pta/pta/tests/". You can compile them into LLVM bitcode files by running:

clang -c -emit-llvm -g example.c -o example.bc # add the "-g" option for debugging purposes
opt -mem2reg example.bc -o example.opt # optional

For each generated bitcode file (a .bc file, or an .orig benchmark file from SUPA-exp), you can analyse it by using:

dvf -dfs -flowbg=1000 example.bc # Run SUPA-FS (flow-sensitive only) with a budget of 1000
dvf -cxt -flowbg=1000 -cxtbg=1000 example.bc # Run SUPA-FSCS (flow and context-sensitive) with a budget of 1000 in both FSCS and FSCI stages
dvf -cxt -maxcxt=3 example.bc # Enable k-limiting context-sensitive analysis of SUPA-FSCS, where k=3.
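Since SUPA is completely re-run under each budget setting, a simple way to explore the precision/efficiency trade-off on a single bitcode file is a budget sweep (a hypothetical sketch, not one of the provided scripts):

```shell
# Re-run SUPA-FS on one program under increasing budgets.
for b in 100 1000 10000; do
    echo "== budget $b =="
    dvf -dfs -flowbg=$b example.bc
done
```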

For large programs with multiple bitcode files, please refer to "Install and Use LLVM Gold Plugin" in SVF for details on how to link several bitcode files into one.

5. LICENSE

GPLv3

6. Contacts and Collaborations

Feel free to contact: ysui AT cse.unsw.edu.au or jingling AT cse.unsw.edu.au. Any comments, contributions and collaborations are welcome.