Tutorials
In this section we demonstrate how to use the platform for Algorithm Configuration, the creation of Algorithm Portfolios, and Algorithm Selection.
Setting up Sparkle
Before running Sparkle, you probably want to have a look at the settings described in the Platform section. In particular, the default Slurm settings should be reconfigured to work with your cluster, for example by specifying a partition to run on.
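For example, a partition might be specified along the following lines (a hypothetical snippet; the exact section and key names depend on your Sparkle version, so check the Platform section for details):
[slurm]
partition = your_partition_name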
Recompilation of example Solvers
Although the examples come precompiled with the download, in some cases they may not work directly on your target system, due to system-specific choices made during compilation. You can follow the steps below to recompile them.
CSCCSat
The CSCCSat solver can be recompiled as follows in the Examples/Resources/Solvers/CSCCSat/ directory:
unzip src.zip
cd src/CSCCSat_source_codes/
make
cp CSCCSat ../../
MiniSAT
The MiniSAT solver can be recompiled as follows in the Examples/Resources/Solvers/MiniSAT/ directory:
unzip src.zip
cd minisat-master/
make
cp build/release/bin/minisat ../
PbO-CCSAT
The PbO-CCSAT solver can be recompiled as follows in the Examples/Resources/Solvers/PbO-CCSAT-Generic/ directory:
unzip src.zip
cd PbO-CCSAT-master/PbO-CCSAT_process_oriented_version_source_code/
make
cp PbO-CCSAT ../../
TCA and FastCA
The TCA and FastCA solvers require GLIBCXX_3.4.21. This library comes with GCC 5.1.0 (or greater). Following installation you may have to update environment variables such as LD_LIBRARY_PATH, LD_RUN_PATH, and CPATH to point to your installation directory.
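For example, assuming GCC is installed under /path/to/gcc (adjust the paths to your installation):
export LD_LIBRARY_PATH=/path/to/gcc/lib64:$LD_LIBRARY_PATH
export LD_RUN_PATH=/path/to/gcc/lib64:$LD_RUN_PATH
export CPATH=/path/to/gcc/include:$CPATH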
TCA can be recompiled as follows in the Examples/Resources/CCAG/Solvers/TCA/ directory:
unzip src.zip
cd TCA-master/
make clean
make
cp TCA ../
FastCA can be recompiled as follows in the Examples/Resources/CCAG/Solvers/FastCA/ directory:
unzip src.zip
cd fastca-master/fastCA/
make clean
make
cp FastCA ../../
VRP_SISRs
The VRP_SISRs solver can be recompiled as follows in the Examples/Resources/CVRP/Solvers/VRP_SISRs/ directory:
unzip src.zip
cd src/
make
cp VRP_SISRs ../
Algorithm Runtime Configuration
These steps can also be found as a Bash script in Examples/configuration.sh.
Initialise the Sparkle platform
sparkle initialise
Add instances
Add train, and optionally test, instances (in this case in CNF format) from a given directory, without running solvers or feature extractors yet:
sparkle add_instances Examples/Resources/Instances/PTN/
sparkle add_instances Examples/Resources/Instances/PTN2/
Add a configurable solver
Add a configurable solver (here for SAT solving) with a wrapper containing the executable name of the solver and a string of command line parameters, without running the solver yet. The solver directory should contain the solver executable, the sparkle_solver_wrapper wrapper, and a .pcs file describing the configurable parameters:
sparkle add_solver Examples/Resources/Solvers/PbO-CCSAT-Generic/
If needed, solvers can also include additional files or scripts in their directory, but keeping additional files to a minimum speeds up copying.
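For illustration, a .pcs file lists one configurable parameter per line, with its domain and default value. A minimal hypothetical example in classic PCS syntax (not the actual PbO-CCSAT parameter space; see the PCS file shipped with the example solver for the real parameters):
init_solution {1, 2} [1]
gamma [100, 1000] [200]i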
Configure the solver
To configure the solver and obtain a target configuration, we run:
sparkle configure_solver --solver Solvers/PbO-CCSAT-Generic/ --instance-set-train Instances/PTN/
This step should take about 10 minutes, although the exact duration depends heavily on your cluster and Slurm settings.
Validate the configuration
To make sure configuration is completed before running validation, you can use the wait command:
sparkle wait
Now we can validate the performance of the best found parameter configuration against the default configuration specified in the PCS file. The test set is optional.
sparkle validate_configured_vs_default --solver Solvers/PbO-CCSAT-Generic/ --instance-set-train Instances/PTN/ --instance-set-test Instances/PTN2/
Generate a report
Wait for validation to be completed
sparkle wait
Generate a report detailing the results on the training (and optionally testing) set. This includes the experimental procedure and performance information; it will be located in a Configuration_Reports/ subdirectory for the solver, training set, and optionally test set, like PbO-CCSAT-Generic_PTN/Sparkle-latex-generator-for-configuration/.
sparkle generate_report
By default the generate_report command will create a report for the most recent solver and instance set(s). To generate a report for older solver-instance set combinations, the desired solver can be specified with --solver Solvers/PbO-CCSAT-Generic/, the training instance set with --instance-set-train Instances/PTN/, and the testing instance set with --instance-set-test Instances/PTN2/.
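For example, combining all three options with the sets used in this tutorial:
sparkle generate_report --solver Solvers/PbO-CCSAT-Generic/ --instance-set-train Instances/PTN/ --instance-set-test Instances/PTN2/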
Run ablation
We can run ablation to determine parameter importance based on the default (from the .pcs file) and configured parameters.
To run ablation using the training instances and validate the parameter importance with the test set:
sparkle run_ablation --solver Solvers/PbO-CCSAT-Generic/ --instance-set-train Instances/PTN/ --instance-set-test Instances/PTN2/
Generate a report
Wait for ablation to be completed
sparkle wait
Generate a report including ablation and, as before, the results on the train (and optionally test) set, the experimental procedure, and performance information; it will be located in a Configuration_Reports/ subdirectory for the solver, training set, and optionally test set, like PbO-CCSAT-Generic_PTN/Sparkle-latex-generator-for-configuration/.
sparkle generate_report
The ablation section can be suppressed with --no-ablation.
Immediate ablation and validation after configuration
By adding --ablation and/or --validate to the configure_solver command, ablation and validation, respectively, will run directly after the configuration is finished. There is no need to execute run_ablation and/or validate_configured_vs_default when these flags are given with the configure_solver command.
Training set only
sparkle configure_solver --solver Solvers/PbO-CCSAT-Generic/ --instance-set-train Instances/PTN/ --ablation --validate
Training and testing sets
Wait for the previous example to be completed
sparkle wait
sparkle configure_solver --solver Solvers/PbO-CCSAT-Generic/ --instance-set-train Instances/PTN/ --instance-set-test Instances/PTN2/ --ablation --validate
Run configured solver
Run configured solver on a single instance
Now that we have a configured solver, we can run it on a single instance to get a result.
sparkle run_configured_solver Examples/Resources/Instances/PTN2/Ptn-7824-b20.cnf
Run configured solver on an instance directory
It is also possible to run a configured solver directly on an entire directory.
sparkle run_configured_solver Examples/Resources/Instances/PTN2
Algorithm Quality Configuration
We can also configure an algorithm based on a quality objective, which can be defined by the user. See the SparkleObjective page for all options regarding objective definitions.
These steps can also be found as a Bash script in Examples/configuration_qualty.sh.
Initialise the Sparkle platform
sparkle initialise
Add instances
Now we add train, and optionally test, instances for configuring our algorithm (in this case for the VRP). The instance sets are placed in a given directory.
sparkle add_instances Examples/Resources/CVRP/Instances/X-1-10/
sparkle add_instances Examples/Resources/CVRP/Instances/X-11-20/
Add a configurable solver
Add a configurable solver (in this tutorial, an algorithm for vehicle routing) with a wrapper containing the executable name of the solver and a string of command line parameters. The solver directory should contain the sparkle_solver_wrapper.py wrapper and a .pcs file describing the configurable parameters.
sparkle add_solver Examples/Resources/CVRP/Solvers/VRP_SISRs/
In this case the source directory also contains an executable, as the algorithm has been compiled from another programming language (C++). If needed, solvers can also include additional files or scripts in their directory, but keeping additional files to a minimum speeds up copying.
Configure the solver
Perform configuration on the solver to obtain a target configuration. For the VRP we measure the absolute quality performance by setting the --objectives option; to avoid passing this for every command, it can also be set in Settings/sparkle_settings.ini.
sparkle configure_solver --solver Solvers/VRP_SISRs/ --instance-set-train Instances/X-1-10/ --objectives quality
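For illustration, such a settings entry might look like this (a hypothetical snippet; the exact section and key names depend on your Sparkle version):
[general]
objectives = quality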
Validate the configuration
To make sure configuration is completed before running validation, you can use the sparkle wait command:
sparkle wait
Validate the performance of the best found parameter configuration. The test set is optional. We again set the performance measure to absolute quality.
sparkle validate_configured_vs_default --solver Solvers/VRP_SISRs/ --instance-set-train Instances/X-1-10/ --instance-set-test Instances/X-11-20/ --objectives quality
Generate a report
Wait for validation to be completed
sparkle wait
Generate a report detailing the results on the training (and optionally testing) set. This includes the experimental procedure and performance information; it will be located in a Configuration_Reports/ subdirectory for the solver, training set, and optionally test set, like VRP_SISRs_X-1-10_X-11-20/Sparkle-latex-generator-for-configuration/. We again set the performance measure to absolute quality.
sparkle generate_report --objectives quality
By default the generate_report command will create a report for the most recent solver and instance set(s). To generate a report for older solver-instance set combinations, the desired solver can be specified with --solver Solvers/VRP_SISRs/, the training instance set with --instance-set-train Instances/X-1-10/, and the testing instance set with --instance-set-test Instances/X-11-20/.
Configuring Random Forest on Iris
We can also use Sparkle for Machine Learning approaches, such as Random Forest for the Iris data set. Note that in this case, the entire data set is considered as being one instance.
Initialise the Sparkle platform
sparkle initialise
Add instances
sparkle add_instances Examples/Resources/Instances/Iris
Add solver
sparkle add_solver Examples/Resources/Solvers/RandomForest
Configure the solver on the data set
sparkle configure_solver --solver RandomForest --instance-set-train Iris --objectives accuracy:max
sparkle wait
Validate the performance of the best found parameter configuration. The test set is optional.
sparkle validate_configured_vs_default --solver RandomForest --instance-set-train Iris --objectives accuracy:max
Generate a report
Wait for validation to be completed
sparkle wait
Generate a report detailing the results on the training (and optionally testing) set.
sparkle generate_report --objectives accuracy:max
Running a Parallel Portfolio
In this tutorial we will measure the runtime performance of several algorithms in parallel. The general idea is that we consider the algorithms as a portfolio that we run in parallel (hence the name) and terminate all running algorithms once a solution is found.
Initialise the Sparkle platform
sparkle initialise
Add instances
First we add the instances to the platform that we want to use for our experiment. Note that if our instance set contains multiple instances, the portfolio will attempt to run them all in parallel. You should use the full path to the directory containing the instance(s).
sparkle add_instances Examples/Resources/Instances/PTN/
Add solvers
Now we can add the solvers that we want to “race” against each other in the parallel portfolio. The path used should be the full path to the solver directory, which should contain the solver executable and the sparkle_solver_wrapper wrapper. It is always a good idea to keep the number of files in your solver directory to a minimum.
sparkle add_solver Examples/Resources/Solvers/CSCCSat/
sparkle add_solver Examples/Resources/Solvers/MiniSAT/
sparkle add_solver Examples/Resources/Solvers/PbO-CCSAT-Generic/
Run the portfolio
Running the portfolio creates a list of jobs that will be executed by the cluster. Use the --cutoff-time option to specify the maximum time for which the portfolio is allowed to run. Add --portfolio-name to specify a portfolio; otherwise the last constructed portfolio will be selected. The --instance-path option must be a path to a single instance file or an instance set directory, for example --instance-path Instances/Instance_Set_Name/Single_Instance.
If your solvers are non-deterministic (e.g. the random seed used to start your algorithm can have an impact on the runtime), you can set the number of runs that should start with a random seed per algorithm. Note that scaling up this variable has a significant impact on how many jobs will be run (number of instances * number of solvers * number of seeds). We can set this with the --solver-seeds argument followed by some positive integer.
sparkle run_parallel_portfolio --instance-path Instances/PTN/ --portfolio-name runtime_experiment
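For example, a hypothetical invocation that additionally limits the runtime to 60 seconds and starts five seeded runs per solver, using the options described above:
sparkle run_parallel_portfolio --instance-path Instances/PTN/ --portfolio-name runtime_experiment --cutoff-time 60 --solver-seeds 5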
Generate the report
The report details the experimental procedure and performance information. It will be located at Output/Parallel_Portfolio/Sparkle_Report.pdf.
sparkle generate_report
Algorithm Selection
Sparkle also offers various tools for algorithm selection, where, given an objective, we train another algorithm to determine which solver is best to use for each instance.
These steps can also be found as a Bash script in Examples/selection.sh.
Initialise the Sparkle platform
sparkle initialise
Add instances
First, we add instance files (in this case in CNF format) to the platform by specifying the path.
sparkle add_instances Examples/Resources/Instances/PTN/
Add solvers
Now we add solvers to the platform as possible options for our selection. Each solver directory should contain the solver wrapper.
sparkle add_solver Examples/Resources/Solvers/CSCCSat/
sparkle add_solver Examples/Resources/Solvers/PbO-CCSAT-Generic/
sparkle add_solver Examples/Resources/Solvers/MiniSAT/
Add feature extractor
To run the selector, we need certain features to represent our instances. To that end, we add a feature extractor to the platform that creates vector representations of our instances.
sparkle add_feature_extractor Examples/Resources/Extractors/SAT-features-competition2012_revised_without_SatELite_sparkle/
Compute features
Now we can compute the features for our instances with the following command:
sparkle compute_features
Run the solvers
Similarly, we can now compute the objective values for our solvers, in this case PAR10. Note that at this point we can still specify multiple objectives by separating them with a comma, or denote them in our settings file.
sparkle run_solvers --objectives PAR10
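For example, a hypothetical call recording two objectives at once (assuming both objectives are defined; see the SparkleObjective page):
sparkle run_solvers --objectives PAR10,PAR2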
Construct a portfolio selector
To make sure feature computation and solver performance computation are done before constructing the portfolio, use the wait command:
sparkle wait
Now we can construct a portfolio selector, using the previously computed features and the results of running the solvers. The --selector-timeout argument determines for how many seconds we train our selector. We can set the flag --solver-ablation for actual marginal contribution computation later.
sparkle construct_portfolio_selector --selector-timeout 1000 --solver-ablation
sparkle wait # Wait for the constructor to complete its computations
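The marginal contributions can afterwards be computed with a command along these lines (a hypothetical invocation; this assumes your Sparkle version provides the compute_marginal_contribution command, and flag names may differ):
sparkle compute_marginal_contribution --actual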
Generate a report
Generate an experimental report detailing the experimental procedure and performance information; this will be located at Output/Selection/Sparkle_Report.pdf.
sparkle generate_report
Run the portfolio selector
Run on a single instance
Run the portfolio selector on a single testing instance; the result will be printed to the command line if you add --run-on local to the command.
sparkle run_portfolio_selector Examples/Resources/Instances/PTN2/plain7824.cnf
Run on an instance set
Run the portfolio selector on a testing instance set
sparkle run_portfolio_selector Examples/Resources/Instances/PTN2/
sparkle wait # Wait for the portfolio selector to be done running on the testing instance set
Generate a report including results on the test set
Generate an experimental report that includes the results on the test set, and as before the experimental procedure and performance information; this will be located at Output/Selection/Sparkle_Report_For_Test.pdf
sparkle generate_report
By default the generate_report command will create a report for the most recent instance set. To generate a report for an older instance set, the desired instance set can be specified with --test-case-directory Test_Cases/PTN2/.
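For example:
sparkle generate_report --test-case-directory Test_Cases/PTN2/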
Comparing against SATZilla 2024
If you wish to compare two feature extractors against one another, you need to remove the previous extractor from the platform (or create a new platform from scratch) by running:
sparkle remove_feature_extractor SAT-features-competition2012_revised_without_SatELite_sparkle
Otherwise, Sparkle will interpret adding another feature extractor as creating a combined feature vector per instance from all extractors present in the platform. Now we can add SATZilla 2024 from the Examples directory. Note that this feature extractor requires GCC (any version; tested with 13.2.0) to run.
sparkle add_feature_extractor Examples/Resources/Extractors/SAT-features-competition2024
We can also investigate a different data set, SAT Competition 2023, for which Sparkle has a subset.
sparkle remove_instances PTN
sparkle add_instances Examples/Resources/Instances/SATCOMP2023_SUB
We compute the features for the new extractor and new instances.
sparkle compute_features
sparkle wait # Wait for it to complete before continuing
And run the solvers on the new data set.
sparkle run_solvers
sparkle wait
Now we can train a selector based on these features.
sparkle construct_portfolio_selector --selector-timeout 1000
sparkle wait # Wait for the computation to be done
And generate the report. When running on the PTN/PTN2 data sets, you can compare the two to see the impact of different feature extractors.
sparkle generate_report
Algorithm selection with multi-file instances
We can also run Sparkle on problems with instances that use multiple files. In this tutorial we will perform algorithm selection on instance sets with multiple files.
Initialise the Sparkle platform
sparkle initialise
Add instances
Add instance files in a given directory, without running solvers or feature extractors yet. In addition to the instance files, the directory should contain a file sparkle_instance_list.txt, where each line contains a space-separated list of files that together form an instance.
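For example, for instances that each consist of a .model and a .constraints file, sparkle_instance_list.txt might look like this (with hypothetical instance names):
Banking1.model Banking1.constraints
Storage5.model Storage5.constraints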
sparkle add_instances Examples/Resources/CCAG/Instances/CCAG/
Add solvers
Add solvers (here for the constrained covering array generation (CCAG) problem) with a wrapper containing the executable name of the solver and a string of command line parameters, without running the solvers yet. Each solver directory should contain the solver executable and a wrapper.
sparkle add_solver Examples/Resources/CCAG/Solvers/TCA/
sparkle add_solver Examples/Resources/CCAG/Solvers/FastCA/
Add feature extractor
Similarly, add a feature extractor, without immediately running it on the instances:
sparkle add_feature_extractor Examples/Resources/CCAG/Extractors/CCAG-features_sparkle/
Compute features
Compute features for all the instances
sparkle compute_features
Run the solvers
Run the solvers on all instances. For the CCAG problem we measure the quality objective by setting the --objectives option; to avoid passing this for every command, it can also be set in Settings/sparkle_settings.ini.
sparkle run_solvers --objectives quality
Construct a portfolio selector
To make sure feature computation and solver performance computation are done before constructing the portfolio, use the wait command:
sparkle wait
Construct a portfolio selector, using the previously computed features and the results of running the solvers. We again set the objective measure to quality.
sparkle construct_portfolio_selector --objectives quality
Running the selector
Run on a single instance
Run the portfolio selector on a single testing instance; the result will be printed to the command line if you add --run-on local to the command. We again set the objective to quality.
sparkle run_portfolio_selector Examples/Resources/CCAG/Instances/CCAG2/Banking2.model Examples/Resources/CCAG/Instances/CCAG2/Banking2.constraints --objectives quality
Run on an instance set
Run the portfolio selector on a testing instance set. We again set the objective to quality.
sparkle run_portfolio_selector Examples/Resources/CCAG/Instances/CCAG2/ --objectives quality
Generate a report including results on the test set
Wait for the portfolio selector to be done running on the testing instance set
sparkle wait
Generate an experimental report that includes the results on the test set, and as before the experimental procedure and performance information; this will be located at Components/Sparkle-latex-generator/Sparkle_Report_For_Test.pdf. We again set the objective to quality.
sparkle generate_report --objectives quality
By default the generate_report command will create a report for the most recent instance set. To generate a report for an older instance set, the desired instance set can be specified with --test-case-directory Test_Cases/CCAG2/.