Sparkle
A Programming by Optimisation (PbO)-based problem-solving platform designed to enable the widespread and effective use of PbO techniques for improving the state-of-the-art in solving a broad range of prominent AI problems, including SAT and AI Planning.
Specifically, Sparkle facilitates the use of:
Automated algorithm configuration
Automated algorithm selection
Furthermore, Sparkle handles various tasks for the user such as:
Algorithm meta-information collection and statistics calculation
Instance/Data Set management and feature extraction
Compute cluster job submission and monitoring
Log file collection
Installation
Sparkle is a Python-based package, but requires several non-Python dependencies to run fully. The easiest installation is through Conda. A setup with a Python virtual environment (venv) is also possible, but requires more user input during the installation process.
Conda
The quickest full installation of Sparkle can be done using Conda (for instructions on installing Conda itself, see the Conda documentation).
Simply download the environment.yml file from the GitHub repository with wget:
wget https://raw.githubusercontent.com/ADA-research/Sparkle/main/environment.yml
and run:
conda env create -f environment.yml
The installation of the environment may take up to five minutes depending on your internet connection. Once the environment has been created, it can be activated by:
conda activate sparkle
Note
The creation of the Conda environment also takes care of the installation of the Sparkle package itself.
Note
You will need to reactivate the environment every time you start the terminal, before using Sparkle.
venv
Sparkle can also be installed as a standalone package using pip. We recommend creating a new virtual environment with venv first, to ensure no clashes between dependencies occur. Note that Sparkle needs Python 3.10 to run, so create your virtual environment with this Python version active.
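As a minimal sketch, assuming a python3.10 interpreter is available on your PATH (the environment name sparkle-venv is just an example):
python3.10 -m venv sparkle-venv
source sparkle-venv/bin/activate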
To install Sparkle in the virtual environment simply type:
pip install sparkle
Note that a direct installation through pip does not handle certain dependencies of the Sparkle CLI, such as the libraries required for compiling RunSolver.
Aside from the other dependencies listed in the next section, you will need to supply the following in your virtual environment:
Python 3.10, which is required to use Sparkle
libnuma and numactl, in order to compile RunSolver (we suggest using GCC 12.2.0)
Bash autocomplete
If you wish for Bash autocompletion to also work for Sparkle's CLI commands, you can run sparkle install autocomplete, which appends a single line to your .bash_profile, allowing you to auto-complete any sparkle command with the Tab key.
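For example, after running the command you can reload your profile so the completion becomes active in the current shell (assuming your shell reads .bash_profile; otherwise start a new login shell):
sparkle install autocomplete
source ~/.bash_profile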
Dependencies
Aside from several package dependencies, Sparkle's package / CLI relies on a few user-supplied executables:
A LaTeX compiler (pdflatex) for report generation
Java, tested with version 1.8.0_402, in order to use SMAC2
R 4.3.1, in order to use IRACE
Other dependencies are handled by the Conda environment, but if that is not an option for you please ensure you have the following:
libnuma and numactl for RunSolver compilation, which Sparkle uses to measure solver metadata. This is restricted to Linux-based systems.
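As a quick sanity check, you can verify that the user-supplied executables listed above are available on your system (reported versions may differ from those listed here):
pdflatex --version
java -version
R --version
numactl --show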
For detailed installation instructions see the documentation: https://ada-research.github.io/Sparkle/
Developer installation
The file dev-env.yml is used for the developer mode of the Sparkle package and contains several extra packages for testing.
The two environments can coexist, since one is named sparkle and the other sparkle-dev. If you want to update an environment, it is better to do a clean installation by removing and recreating it. For example:
conda deactivate
conda env remove -n sparkle
conda env create -f environment.yml
conda activate sparkle
This should be fast, as both conda and pip use a local cache for the packages.
Examples
See the Examples directory for examples of how to use Sparkle. All Sparkle CLI commands need to be executed from the root of an initialised Sparkle directory.
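For instance, a typical session starts by initialising a platform directory with the CLI and then running the remaining commands from that directory's root. The sketch below is illustrative only; the command names follow the sparkle <command> pattern shown earlier, but the exact invocations and arguments should be taken from the Examples directory and the documentation:
sparkle initialise
sparkle <command> [arguments]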
Documentation
The documentation can be read at https://ada-research.github.io/Sparkle/.
A PDF is also available in the repository.
Licensing
Sparkle is distributed under the MIT licence.
Component licences
Sparkle is distributed with a number of external components, solvers, and instance sets. Descriptions and licensing information for each of these are included in the sparkle/Components and Examples/Resources/ directories.
The SATzilla 2012 feature extractor is taken from http://www.cs.ubc.ca/labs/beta/Projects/SATzilla/ with some modifications. The main modification is that the call to the SAT instance preprocessor SatELite is disabled. It is located in Examples/Resources/Extractors/SAT-features-competition2012_revised_without_SatELite/.
Citation
If you use Sparkle for one of your papers and want to cite it, please cite our paper describing Sparkle: K. van der Blom, H. H. Hoos, C. Luo and J. G. Rook, "Sparkle: Toward Accessible Meta-Algorithmics for Improving the State of the Art in Solving Challenging Problems," in IEEE Transactions on Evolutionary Computation, vol. 26, no. 6, pp. 1351-1364, Dec. 2022, doi: 10.1109/TEVC.2022.3215013.
@article{BloEtAl22,
title={Sparkle: Toward Accessible Meta-Algorithmics for Improving the State of the Art in Solving Challenging Problems},
author={van der Blom, Koen and Hoos, Holger H. and Luo, Chuan and Rook, Jeroen G.},
journal={IEEE Transactions on Evolutionary Computation},
year={2022},
volume={26},
number={6},
pages={1351--1364},
doi={10.1109/TEVC.2022.3215013}
}
Maintainers
Thijs Snelleman, Jeroen Rook, Hadar Shavit, Holger H. Hoos
Contributors
Chuan Luo, Richard Middelkoop, Jérémie Gobeil, Sam Vermeulen, Marcel Baumann, Jakob Bossek, Tarek Junied, Yingliu Lu, Malte Schwerin, Aaron Berger, Marie Anastacio, Koen van der Blom, Noah Peil, Brian Schiller, Emir Pisiciri
Contact
sparkle@aim.rwth-aachen.de
Sponsors
The development of Sparkle is partially sponsored by the Alexander von Humboldt Foundation.
Package Modules
- AblationScenario
- Callable
- ConfigurationScenario
- Configurator
- Extractor
- FeatureDataFrame
- FeatureGroup
- FeatureSubgroup
- FeatureType
- FileInstanceSet
- InstanceSet
- Instance_Set
- IterableFileInstanceSet
- MultiFileInstanceSet
- Option
- PCSConverter
- Path
- PerformanceDataFrame
- RunSolver
- SATVerifier
- SelectionScenario
- Selector
- Settings
- SlurmBatch
- SolutionVerifier
- Solver
- SolverStatus
- SparkleCallable
- SparkleObjective
- UseTime
- about
- cli_types
- configspace
- configurator
AblationScenario
AblationScenario.check_for_ablation()
AblationScenario.check_requirements()
AblationScenario.create_configuration_file()
AblationScenario.create_instance_file()
AblationScenario.create_scenario()
AblationScenario.download_requirements()
AblationScenario.from_file()
AblationScenario.read_ablation_table()
AblationScenario.scenario_dir
AblationScenario.submit_ablation()
AblationScenario.table_file
AblationScenario.tmp_dir
AblationScenario.validation_dir
AblationScenario.validation_dir_tmp
ConfigurationScenario
ConfigurationScenario.ablation_scenario
ConfigurationScenario.configuration_ids
ConfigurationScenario.configurator
ConfigurationScenario.create_scenario()
ConfigurationScenario.create_scenario_file()
ConfigurationScenario.directory
ConfigurationScenario.find_scenario()
ConfigurationScenario.from_file()
ConfigurationScenario.name
ConfigurationScenario.results_directory
ConfigurationScenario.scenario_file_path
ConfigurationScenario.serialise()
ConfigurationScenario.timestamp
ConfigurationScenario.tmp
ConfigurationScenario.validation
Configurator
- extractor
- feature_dataframe
- features
- general
- get_solver_call_params
- get_time_pid_random_string
- implementations
- importlib
- inspect
- instance
- instances
- objective
- objective_string_regex
- objective_variable_regex
- parameters
- performance_dataframe
- platform
Option
Settings
Settings.ablation_max_parallel_runs_per_node
Settings.ablation_racing_flag
Settings.appendices
Settings.apply_arguments()
Settings.check_settings_changes()
Settings.configurator
Settings.configurator_max_iterations
Settings.configurator_number_of_runs
Settings.configurator_solver_call_budget
Settings.extractor_cutoff_time
Settings.get_configurator_output_path()
Settings.get_configurator_settings()
Settings.irace_first_test
Settings.irace_max_experiments
Settings.irace_max_iterations
Settings.irace_max_time
Settings.irace_mu
Settings.minimum_marginal_contribution
Settings.objectives
Settings.parallel_portfolio_check_interval
Settings.parallel_portfolio_num_seeds_per_solver
Settings.paramils_cli_cores
Settings.paramils_cpu_time_budget
Settings.paramils_focused_approach
Settings.paramils_max_iterations
Settings.paramils_max_runs
Settings.paramils_min_runs
Settings.paramils_number_initial_configurations
Settings.paramils_random_restart
Settings.paramils_use_cpu_time_in_tunertime
Settings.read_settings_ini()
Settings.run_on
Settings.sbatch_settings
Settings.seed
Settings.selection_class
Settings.selection_model
Settings.slurm_job_prepend
Settings.slurm_jobs_in_parallel
Settings.smac2_cli_cores
Settings.smac2_cpu_time_budget
Settings.smac2_max_iterations
Settings.smac2_target_cutoff_length
Settings.smac2_use_tunertime_in_cpu_time_budget
Settings.smac2_wallclock_time_budget
Settings.smac3_cpu_time_budget
Settings.smac3_crash_cost
Settings.smac3_facade
Settings.smac3_facade_max_ratio
Settings.smac3_max_budget
Settings.smac3_min_budget
Settings.smac3_number_of_trials
Settings.smac3_termination_cost_threshold
Settings.smac3_use_default_config
Settings.smac3_wallclock_time_budget
Settings.solver_cutoff_time
Settings.verbosity_level
Settings.write_settings_ini()
Settings.write_used_settings()
- re
- resolve_objective
- runsolver
- selector
SelectionScenario
SelectionScenario.create_scenario()
SelectionScenario.create_scenario_file()
SelectionScenario.from_file()
SelectionScenario.instance_sets
SelectionScenario.serialise()
SelectionScenario.solvers
SelectionScenario.test_instance_sets
SelectionScenario.test_instances
SelectionScenario.training_instance_sets
SelectionScenario.training_instances
Selector
- settings_objects
- slurm_parsing
- solver
- solver_wrapper_parsing
- sparkle_callable
- status
- structures
FeatureDataFrame
FeatureDataFrame.add_extractor()
FeatureDataFrame.add_instances()
FeatureDataFrame.extractors
FeatureDataFrame.features
FeatureDataFrame.get_feature_groups()
FeatureDataFrame.get_instance()
FeatureDataFrame.get_value()
FeatureDataFrame.has_missing_value()
FeatureDataFrame.has_missing_vectors()
FeatureDataFrame.impute_missing_values()
FeatureDataFrame.instances
FeatureDataFrame.num_features
FeatureDataFrame.remaining_jobs()
FeatureDataFrame.remove_extractor()
FeatureDataFrame.remove_instances()
FeatureDataFrame.reset_dataframe()
FeatureDataFrame.save_csv()
FeatureDataFrame.set_value()
FeatureDataFrame.sort()
PerformanceDataFrame
PerformanceDataFrame.add_configuration()
PerformanceDataFrame.add_instance()
PerformanceDataFrame.add_objective()
PerformanceDataFrame.add_runs()
PerformanceDataFrame.add_solver()
PerformanceDataFrame.best_configuration()
PerformanceDataFrame.best_instance_performance()
PerformanceDataFrame.best_performance()
PerformanceDataFrame.clean_csv()
PerformanceDataFrame.clone()
PerformanceDataFrame.configuration_ids
PerformanceDataFrame.configuration_performance()
PerformanceDataFrame.configurations
PerformanceDataFrame.filter_objective()
PerformanceDataFrame.get_configurations()
PerformanceDataFrame.get_full_configuration()
PerformanceDataFrame.get_instance_num_runs()
PerformanceDataFrame.get_job_list()
PerformanceDataFrame.get_solver_ranking()
PerformanceDataFrame.get_value()
PerformanceDataFrame.has_missing_values
PerformanceDataFrame.instances
PerformanceDataFrame.is_missing()
PerformanceDataFrame.marginal_contribution()
PerformanceDataFrame.mean()
PerformanceDataFrame.multi_objective
PerformanceDataFrame.num_instances
PerformanceDataFrame.num_objectives
PerformanceDataFrame.num_runs
PerformanceDataFrame.num_solver_configurations
PerformanceDataFrame.num_solvers
PerformanceDataFrame.objective_names
PerformanceDataFrame.objectives
PerformanceDataFrame.remove_configuration()
PerformanceDataFrame.remove_empty_runs()
PerformanceDataFrame.remove_instances()
PerformanceDataFrame.remove_objective()
PerformanceDataFrame.remove_runs()
PerformanceDataFrame.remove_solver()
PerformanceDataFrame.reset_value()
PerformanceDataFrame.run_ids
PerformanceDataFrame.save_csv()
PerformanceDataFrame.schedule_performance()
PerformanceDataFrame.set_value()
PerformanceDataFrame.solvers
PerformanceDataFrame.verify_indexing()
PerformanceDataFrame.verify_objective()
PerformanceDataFrame.verify_run_id()
- tools
- types
- verifiers