Optimising the Benchmarker Configuration
The Benchmarker configuration can be edited to include your `api_key`, `player_id` and `api_url`, and to configure other benchmarker settings.
Editing the Configuration
The Benchmarker configuration can be edited after setting up the Master Node through the UI. Head over to the config page to edit the configuration to include your `api_key`, `player_id` and `api_url`, and to configure the benchmarker settings.
The configuration is in JSON format and can be edited to include the following fields:
```json
{
  "player_id": "<YOUR_PLAYER_ID>",
  "api_key": "<YOUR_API_KEY>",
  "api_url": "<TESTNET_OR_MAINNET_API_URL>",
  "job_manager_config": {
    ...
  },
  "slave_manager_config": {
    ...
  },
  "precommit_manager_config": {
    ...
  },
  "difficulty_sampler_config": {
    ...
  },
  "submissions_manager_config": {
    ...
  }
}
```
Recommended Starting Point
Here are some key considerations for optimising your Benchmarker configuration if you are doing it for the first time:
- Difficulty Range: Set the `difficulty_range` to sample a range of difficulties. A good starting point would be `[0.0, 0.1]`.
- Batch Size: Set the `batch_size` to the number of threads of your slave (round down to the nearest power of 2).
  - Aim for a `batch_size` so that each batch takes around 15 - 30 seconds to compute. If you see the batches for a certain challenge taking less than 10 seconds to compute, then gradually increase the `batch_size` in powers of 2 (e.g. 16, 32, 64, 128, etc.).
  - Adjust the `num_nonces` accordingly based on the new `batch_size`.
  - Example (`config.json`): Here, the `batch_size` for the challenge `vehicle_routing` is set to 16 (with `num_nonces` = 80).

    ```json
    ...
    "job_manager_config": {
      "batch_sizes": {
        "vehicle_routing": 16, // keep increasing the batch size (32, 64, 128, ...) so it takes 15 - 30 seconds to compute
        ...
      }
    },
    "slave_manager_config": {
      ...
    },
    "precommit_manager_config": {
      ...
      "vehicle_routing": {
        "weight": 1,
        "algorithm": "advanced_routing",
        "num_nonces": 80, // adjust this based on the new batch size for the challenge
        "base_fee_limit": "10000000000000000"
      }
    },
    ...
    ```

    Checking the UI, each batch is taking around 4 seconds to compute. To aim for around 20 seconds per batch, we apply a 4x multiplier and increase the `batch_size` to 64 and `num_nonces` to 320.
- Number of Nonces: Set the `num_nonces` to `batch_size` × number of slaves. This is a good starting point because each slave needs to compute just 1 batch for the benchmark to be complete.
  - If you update the `batch_size`, also adjust the `num_nonces` accordingly.
- Max Concurrent Batches: Set the `max_concurrent_batches` to 1 so that slaves only work on one batch at a time. This is an advanced setting to manage multiple slaves of different sizes.
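Putting these starting points together: suppose, purely as an illustration, that you have 5 slaves with 64 threads each. The sketch below reuses the structure from the example above and shows where the resulting values would go; the challenge name, slave count and thread count are assumptions, and unrelated fields are omitted.

```json
...
"job_manager_config": {
  "batch_sizes": {
    "vehicle_routing": 64  // threads per slave (64), already a power of 2
  }
},
"precommit_manager_config": {
  ...
  "vehicle_routing": {
    "weight": 1,
    "algorithm": "advanced_routing",
    "num_nonces": 320,  // batch_size (64) x number of slaves (5), so each slave computes one batch
    "base_fee_limit": "10000000000000000"
  }
},
...
```

From there, watch the batch times in the UI and scale `batch_size` (and `num_nonces` with it) in powers of 2 until batches take roughly 15 - 30 seconds.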
player_id, api_key and api_url
The `player_id` is the wallet address of your Benchmarker, i.e. the address you used to obtain the `api_key` and sign the message. To get the `api_key`, check the Benchmarker setup guide.
The `api_url` is the URL of the API endpoint. The mainnet API URL is `https://mainnet-api.tig.foundation`; if you are benchmarking on testnet, use the testnet API URL instead.
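For example, the top-level fields of a mainnet configuration would look something like this (the placeholders are to be replaced with your own values):

```json
{
  "player_id": "<YOUR_WALLET_ADDRESS>",  // the address used to obtain the api_key
  "api_key": "<YOUR_API_KEY>",           // from the Benchmarker setup guide
  "api_url": "https://mainnet-api.tig.foundation",
  ...
}
```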
difficulty_sampler_config
The `difficulty_sampler_config` allows you to set the `difficulty_range` for each challenge.
- Every block, each challenge recalculates its `base_frontier` and `scaled_frontier`.
- The difficulties within these 2 frontiers are “sorted” from easiest to hardest (0.0 is easiest, 1.0 is hardest).
- Benchmarkers can set the `difficulty_range` from which to sample a difficulty (see the sketch after this list). Examples:
  - `[0.0, 1.0]` samples the full range of valid difficulties.
  - `[0.0, 0.1]` samples the easiest 10% of valid difficulties.
- Key consideration: Easier difficulties may result in more solutions given the same compute, but those solutions might not remain qualifiers for long if the frontiers get harder.
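As a sketch of what this could look like, assuming `difficulty_sampler_config` maps each challenge name to its `difficulty_range` (the exact nesting may differ in your default config, and the challenge names are just examples):

```json
"difficulty_sampler_config": {
  "difficulty_ranges": {            // assumed field name; check your default config for the exact layout
    "satisfiability": [0.0, 0.1],   // sample from the easiest 10% of valid difficulties
    "vehicle_routing": [0.0, 0.1],
    "knapsack": [0.0, 1.0]          // sample from the full range of valid difficulties
  }
},
```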
job_manager_config
The `job_manager_config` allows you to set the `batch_size` for each challenge.
- `batch_size` is the number of nonces that are part of a batch. It must be a power of 2.
- It is recommended to pick a `batch_size` for your slave with the lowest `num_workers` such that it takes a few seconds to compute (e.g. 5 seconds).
- The `batch_size` shouldn’t be too small, or else network latency between `master` and `slave` will affect performance.
- To support slaves with different `num_workers`, check `slave_manager_config` below.
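For example, the `batch_sizes` map (the same structure as in the earlier `config.json` excerpt) might look like this; the values are illustrative only and should be tuned per challenge against your slowest slave:

```json
"job_manager_config": {
  "batch_sizes": {          // one entry per challenge; each value must be a power of 2
    "satisfiability": 32,   // illustrative values only
    "vehicle_routing": 64,
    "knapsack": 64
  }
},
```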
precommit_manager_config
The `precommit_manager_config` allows you to control your benchmarks:
- The `max_pending_benchmarks` is the maximum number of pending benchmarks.
  - Key consideration: You want batches to always be available for your slaves, but at the same time, if you submit benchmarks too slowly, there will be large delays before they become active.
- The `num_nonces` is the number of nonces to compute per benchmark. It is recommended to adjust this based on the logs, which tell you the average number of nonces needed to find a solution. Example log: `global qualifier difficulty stats for vehicle_routing: (#nonces: 43739782840, #solutions: 22376, avg_nonces_per_solution: 1954763)`.
- The `weight` affects how likely the challenge is to be picked (a weight of 0 will never be picked). It is recommended to adjust this if the logs warn you to benchmark a specific challenge to increase your cutoff. Example log: `recommend finding more solutions for challenge knapsack to avoid being cut off`.
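A sketch of how these settings fit together, reusing the per-challenge fields from the earlier `config.json` excerpt; the placement of `max_pending_benchmarks`, the second challenge entry and all values are illustrative assumptions:

```json
"precommit_manager_config": {
  "max_pending_benchmarks": 4,      // assumed placement and value
  "vehicle_routing": {
    "weight": 1,                    // relative chance of this challenge being picked (0 = never picked)
    "algorithm": "advanced_routing",
    "num_nonces": 320,              // adjust using avg_nonces_per_solution from the logs
    "base_fee_limit": "10000000000000000"
  },
  "knapsack": {
    "weight": 2,                    // e.g. weighted higher if the logs warn about being cut off on knapsack
    "algorithm": "<YOUR_SELECTED_ALGORITHM>",
    "num_nonces": 320,
    "base_fee_limit": "10000000000000000"
  }
},
```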
slave_manager_config
The `slave_manager_config` allows you to control your slaves:
- When a slave makes a request, the manager iterates through each slave config one at a time until it finds a regex match (see the sketch after this list). The most specific regexes should be earlier in the list, and the more general regexes should be later in the list.
- The `max_concurrent_batches` determines how many batches of that challenge a slave can fetch & process concurrently. You can set the `max_concurrent_batches` to 1 for a slave to only work on one batch at a time (this is an advanced setting to manage multiple slaves of different sizes).
- The `selected_challenges` is a whitelist of challenges that will be included in the benchmark. If you don’t want a slave to benchmark a specific challenge, remove its entry from the list. For example:

  ```json
  ...
  "selected_challenges": [
    "satisfiability",
    "vehicle_routing"
  ],
  ...
  ```

  This means that the master will only return batches of `satisfiability` and `vehicle_routing` challenges for those slaves.
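A minimal sketch of a slave config list under these rules; the `slaves` and `name_regex` field names, the regex patterns and the values are assumptions for illustration, so compare against your default config for the exact schema:

```json
"slave_manager_config": {
  "slaves": [                         // assumed field name; entries are matched top to bottom
    {
      "name_regex": "big-slave-.*",   // more specific regexes first
      "max_concurrent_batches": 1,
      "selected_challenges": [
        "vehicle_routing"
      ]
    },
    {
      "name_regex": ".*",             // general catch-all regex last
      "max_concurrent_batches": 1,
      "selected_challenges": [
        "satisfiability",
        "vehicle_routing"
      ]
    }
  ]
},
```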
Dynamically adjusting your Configuration
A Benchmarker’s block reward is determined by the number of qualifying solutions they can compute, and their imbalance penalty.
To maximise earnings, Benchmarkers are recommended to dynamically adjust their configuration. This may involve observing current difficulty frontiers and what other Benchmarkers are doing.
This script is an example of how to adjust configuration via code.