Benchmark Proof of Work
Benchmarkers interact with TIG by performing proof-of-work through benchmarks, which are foundational to the TIG protocol. A benchmark consists of nonces, each of which deterministically generates a random challenge instance that the Benchmarker must solve. Within a benchmark, nonces are grouped into bundles, each containing a fixed number of nonces. An algorithm's performance is evaluated as its average performance across the nonces in each bundle. Bundle size (the number of nonces per bundle) is defined per challenge.
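As a rough sketch, the benchmark/bundle/nonce structure and the per-bundle average can be pictured as below. The type names, field names, and quality values are illustrative only, not the protocol's actual schema.

```rust
// Illustrative sketch of how a benchmark is organised into bundles of nonces.
// Type and field names are hypothetical; `bundle_size` is defined per challenge.
struct Bundle {
    nonces: Vec<u64>, // exactly `bundle_size` nonces
}

struct Benchmark {
    bundles: Vec<Bundle>,
}

/// A bundle's performance is the average solution quality across its nonces.
fn bundle_quality(solution_qualities: &[f64]) -> f64 {
    solution_qualities.iter().sum::<f64>() / solution_qualities.len() as f64
}

fn main() {
    let bundle_size: u64 = 4;
    let benchmark = Benchmark {
        bundles: (0u64..4) // at least 4 bundles per benchmark (see Precommit)
            .map(|b| Bundle {
                nonces: (b * bundle_size..(b + 1) * bundle_size).collect(),
            })
            .collect(),
    };
    let total_nonces: usize = benchmark.bundles.iter().map(|b| b.nonces.len()).sum();
    println!("bundles: {}, nonces: {}", benchmark.bundles.len(), total_nonces);
    // e.g. a bundle whose 4 nonces produced (hypothetical) solution qualities:
    println!("bundle quality: {}", bundle_quality(&[0.5, 1.0, 0.75, 0.75])); // 0.75
}
```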
Proof of work in a benchmark can be broken down into three stages:
- Precommit - Benchmarkers first commit to their benchmark settings.
- Compute - Benchmarkers then receive a random hash from the protocol to generate challenge instances and use their compute to find solutions to the challenges.
- Proof Submission - Benchmark proofs are submitted and enter the protocol lifecycle.
Precommit
TIG requires Benchmarkers to commit to their settings and nonces before starting any benchmark. This prevents manipulation and makes the computation independently verifiable.
Selecting benchmark settings
During precommit, a Benchmarker must select their settings (see Getting Started). The fields that require careful consideration are listed below, followed by a sketch of the resulting settings record:
- Challenge ID: Identifies the proof-of-work challenge for which the Benchmarker is attempting to compute solutions. The challenge must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on minimising their imbalance. Note: imbalance minimisation is the default strategy for the browser benchmarker.
- Algorithm ID: The proof-of-work algorithm that the Benchmarker wants to use to compute solutions. The algorithm must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on the algorithm's performance in computing solutions.
- Hyperparameters: The algorithm-specific configuration settings that shape how the chosen proof-of-work algorithm explores the search space. Each algorithm defines its own set of available hyperparameters within its code. Properly tuning these values is essential for maximising performance: well-optimised hyperparameters can deepen the search process, improve solution quality, and, as a result, increase the number of qualifying solutions produced per unit of compute.
- Selected Track ID: The ID of the challenge track for which the Benchmarker is attempting to compute solutions.
- Fuel: Determines how long a Benchmarker chooses to run the selected algorithm, up to a defined maximum. With variable fuel, Benchmarkers can decide the duration of their benchmark based on their strategy: longer runs yield higher-quality solutions but require more compute.
- Number of bundles: A bundle is made up of a fixed number of nonces, so by committing to a number of bundles a Benchmarker also commits to a number of nonces. Each benchmark must contain at least 4 bundles.
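The sketch below pictures the committed settings as a single record. The field names, types, example values, and the validation shown are hypothetical, chosen for readability rather than matching the protocol's actual schema.

```rust
// Hypothetical sketch of the settings a Benchmarker commits to at precommit.
// Names, types and example values are illustrative, not the actual schema.
#[allow(dead_code)]
struct PrecommitSettings {
    challenge_id: String,                   // must be active in the referenced block
    algorithm_id: String,                   // must be active in the referenced block
    hyperparameters: Vec<(String, String)>, // algorithm-specific key/value pairs
    track_id: String,                       // selected challenge track
    fuel: u64,                              // chosen runtime budget, up to the maximum
    num_bundles: u32,                       // at least 4
}

fn validate(settings: &PrecommitSettings) -> Result<(), String> {
    if settings.num_bundles < 4 {
        return Err("each benchmark must contain at least 4 bundles".into());
    }
    Ok(())
}

fn main() {
    let settings = PrecommitSettings {
        challenge_id: "c001".into(),
        algorithm_id: "a042".into(),
        hyperparameters: vec![("depth".into(), "8".into())], // hypothetical hyperparameter
        track_id: "t001".into(),
        fuel: 10_000_000,
        num_bundles: 8,
    };
    println!("precommit ok: {}", validate(&settings).is_ok());
}
```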
A Benchmarker must commit to their selected settings to prevent changes during benchmarking.
Critically, if a Benchmarker uses a different algorithm from the one they committed to, this can be detected: the committed settings and nonces allow anyone to recompute the benchmark and compare the results.
Compute
After a precommit is confirmed, the protocol assigns it a random hash. A Benchmarker can then start their benchmark by iterating over their nonces, generating a unique seed and challenge instance for each nonce, and executing their selected algorithm on it.
For each nonce, the algorithm execution results in one of two outcomes:
- Solution: The result satisfies the challenge’s constraints
- No solution: The result fails to meet the constraints, or the code execution fails
Each solution has an attached quality score; higher-quality solutions are more likely to earn rewards.
Examples of quality metrics include the test error in the Neural Network challenge, or the percentage by which a route in the Capacitated Vehicle Routing challenge is shorter than a baseline route.
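A minimal sketch of the per-nonce compute loop follows. `generate_instance` and `run_algorithm` are placeholders standing in for the real runtime, which also meters fuel and sandboxes algorithm execution; the mixing constant and outcome rule are toy values for illustration.

```rust
// Hypothetical sketch of the per-nonce compute loop. `generate_instance` and
// `run_algorithm` are placeholders for the real runtime, which also meters
// fuel and sandboxes algorithm execution.

enum Outcome {
    Solution { quality: f64 }, // satisfies the challenge's constraints
    NoSolution,                // constraints not met, or execution failed
}

// Placeholder: the real derivation hashes the random hash, settings and nonce
// (see "Generating Unpredictable Challenge Instances" below).
fn generate_instance(_random_hash: &str, _settings: &str, nonce: u64) -> u64 {
    nonce.wrapping_mul(0x9E3779B97F4A7C15) // toy mixing only
}

// Placeholder for executing the selected algorithm on one instance.
fn run_algorithm(instance: u64) -> Outcome {
    if instance % 3 != 0 {
        Outcome::Solution { quality: (instance % 100) as f64 / 100.0 }
    } else {
        Outcome::NoSolution
    }
}

fn main() {
    let (random_hash, settings) = ("<random hash from protocol>", "<committed settings>");
    let num_nonces = 8u64; // num_bundles * bundle_size in practice

    for nonce in 0..num_nonces {
        let instance = generate_instance(random_hash, settings, nonce);
        match run_algorithm(instance) {
            Outcome::Solution { quality } => println!("nonce {nonce}: solution, quality {quality:.2}"),
            Outcome::NoSolution => println!("nonce {nonce}: no solution"),
        }
    }
}
```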
Generating Unpredictable Challenge Instances
Benchmarkers generate challenge instances by:
- Generating a seed by hashing the random hash, their settings, and the nonce. This ensures unpredictability.
- Seeding a pseudo-random number generator (PRNG) with that seed before generating the challenge instance.
This process is deterministic, allowing anyone to regenerate the same unpredictable instance.
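A minimal sketch of this seeding scheme, using Rust's standard-library hasher and a toy xorshift PRNG purely so the example is self-contained; the protocol's actual hash function and each challenge's instance-generation code differ.

```rust
// Sketch of deterministic seed generation per nonce.
// DefaultHasher stands in for the protocol's actual hash function;
// the settings string and the toy PRNG are illustrative only.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn seed_for_nonce(random_hash: &str, settings: &str, nonce: u64) -> u64 {
    let mut hasher = DefaultHasher::new();
    random_hash.hash(&mut hasher);
    settings.hash(&mut hasher);
    nonce.hash(&mut hasher);
    hasher.finish()
}

// Tiny deterministic PRNG (xorshift) seeded from the derived seed;
// the real challenge code would use its own PRNG to build the instance.
fn next_rand(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let seed = seed_for_nonce("<random hash from protocol>", "<committed settings>", 42);
    let mut state = seed.max(1); // xorshift must not be seeded with 0
    // Anyone re-running this with the same inputs regenerates the same instance.
    let instance_values: Vec<u64> = (0..3).map(|_| next_rand(&mut state)).collect();
    println!("seed = {seed}, first instance values = {instance_values:?}");
}
```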
Challenges are designed so that the number of possible instances is large enough that it is intractable for Benchmarkers to store solutions for potential re-use. This forces them to actually execute their selected algorithm.
Proof Submission
Once the computational work is completed, Benchmarkers begin the proof submission process:
- Submit Benchmark Results: Benchmarkers report to the protocol whether any nonce failed to produce a solution (in which case the benchmark is discarded). If every nonce found a solution, they report the quality of each solution (see Compute).
- Protocol Sampling: After the benchmark submission is confirmed, the TIG protocol randomly groups the nonces into bundles. Each bundle is assigned a quality score, defined as the average quality of its solutions. If a bundle's quality is below a minimum threshold (set per challenge track), that bundle is discarded and does not proceed to verification. From the remaining bundles, TIG samples nonces for verification (a sketch of this step follows the list).
- Generate Merkle Proofs: Benchmarkers must then generate Merkle proofs for the sampled nonces, enabling efficient probabilistic verification.
- Submit Proofs: The Merkle proofs are submitted to complete the benchmark process. Once Benchmarkers submit their proofs, their direct involvement in the benchmark ends.
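The sketch below illustrates the Protocol Sampling step under simplifying assumptions: a toy PRNG-driven shuffle, a hypothetical quality threshold, and a one-nonce-per-bundle sampling rule. The protocol's actual grouping, thresholds, and sampling rules are defined per challenge track and will differ from this.

```rust
// Sketch of the Protocol Sampling step. The shuffle, threshold value, and
// sampling rule below are illustrative assumptions, not the protocol's
// actual parameters (which are set per challenge track).

fn next_rand(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

/// Randomly group nonces into bundles of `bundle_size` using a seeded shuffle.
fn group_into_bundles(mut nonces: Vec<u64>, bundle_size: usize, seed: u64) -> Vec<Vec<u64>> {
    let mut state = seed.max(1);
    // Fisher-Yates shuffle driven by the toy PRNG above.
    for i in (1..nonces.len()).rev() {
        let j = (next_rand(&mut state) % (i as u64 + 1)) as usize;
        nonces.swap(i, j);
    }
    nonces.chunks(bundle_size).map(|c| c.to_vec()).collect()
}

fn main() {
    let bundle_size = 2;
    let min_quality = 0.75; // hypothetical per-track threshold
    // reported quality for each nonce (index = nonce), toy values
    let qualities = vec![0.9, 0.6, 0.8, 0.95, 0.7, 0.85, 0.9, 0.65];
    let nonces: Vec<u64> = (0..qualities.len() as u64).collect();

    let bundles = group_into_bundles(nonces, bundle_size, 0xC0FFEE);

    // Keep only bundles whose average quality clears the threshold.
    let qualifying: Vec<&Vec<u64>> = bundles
        .iter()
        .filter(|bundle| {
            let avg: f64 = bundle.iter().map(|&n| qualities[n as usize]).sum::<f64>()
                / bundle.len() as f64;
            avg >= min_quality
        })
        .collect();

    // Sample one nonce per qualifying bundle for Merkle-proof verification
    // (the one-per-bundle rule is an assumption of this sketch).
    let mut state = 0xBEEFu64;
    for bundle in &qualifying {
        let pick = bundle[(next_rand(&mut state) % bundle.len() as u64) as usize];
        println!("bundle {:?} -> verify nonce {}", bundle, pick);
    }
}
```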