Benchmark Proof of Work
Benchmarkers interact with TIG by performing proof-of-work through benchmarks, which are foundational to the TIG protocol. Proof of work in a benchmark can be broken down into three stages:
- Precommit - Benchmarkers first commit to their benchmark settings.
- Compute - Benchmarkers then receive a random hash from the protocol to generate challenge instances and use their compute to find solutions to the challenges.
- Proof Submission - Benchmark proofs are submitted and enter the protocol lifecycle.
Precommit
TIG requires Benchmarkers to commit to their settings and nonces before starting any benchmark. This prevents settings from being changed mid-benchmark and ensures that the computation can be fairly verified.
Selecting benchmark settings
During precommit, a Benchmarker must select their settings, comprising 5 fields, before benchmarking can begin:
- Player ID: The address of the Benchmarker. This prevents fraudulent re-use of solutions computed by another Benchmarker.
- Challenge ID: The proof-of-work challenge for which the Benchmarker is attempting to compute solutions. The challenge must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on minimising their imbalance. Note: Imbalance minimisation is the default strategy for the browser benchmarker.
- Algorithm ID: The proof-of-work algorithm that the Benchmarker wants to use to compute solutions. The algorithm must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on the algorithm’s performance in computing solutions.
- Block ID: A reference block from which the lifespan of the solutions begins counting down. Benchmarkers are incentivised to reference the latest block so as to maximise the remaining lifespan of any computed solutions.
- Difficulty: The difficulty of the challenge instances for which the Benchmarker is attempting to compute solutions. The difficulty must lie within the valid range of the challenge for the referenced block. Benchmarkers are incentivised to strike a balance between the number of blocks for which their solutions will remain qualifiers and the number of solutions they can compute (e.g. a lower difficulty may yield more solutions, but those solutions may remain qualifiers for fewer blocks).
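Taken together, the committed settings might look like the following sketch; the field names and types here are illustrative assumptions, not the protocol’s actual definitions:

```rust
// A minimal sketch of the five precommit settings. Field names and types are
// illustrative assumptions; the actual definitions live in the TIG protocol.
struct BenchmarkSettings {
    player_id: String,    // address of the Benchmarker
    challenge_id: String, // must be an active challenge in the referenced block
    algorithm_id: String, // must be an active algorithm in the referenced block
    block_id: String,     // reference block; solution lifespan counts down from it
    difficulty: Vec<i32>, // must lie within the challenge's valid difficulty range
}
```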
Committing to a number of nonces
A Benchmarker must commit to their selected settings and a number of nonces to prevent changes during benchmarking.
Critically, if a Benchmarker uses a different algorithm from the one they committed to, this can be detected, as the committed settings and nonces allow anyone to recompute the benchmark.
When picking a number of nonces, a Benchmarker should consider their amount of compute, their selected algorithm, and difficulty.
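As a rough, hypothetical illustration of that trade-off, a Benchmarker might size their nonce commitment from measured throughput and an intended benchmark duration:

```rust
// Hypothetical sizing helper: commit to roughly as many nonces as the
// selected algorithm can evaluate, at the chosen difficulty, within the
// intended benchmark duration. The figures below are made up.
fn estimate_num_nonces(nonces_per_second: u64, benchmark_seconds: u64) -> u64 {
    nonces_per_second * benchmark_seconds
}

fn main() {
    let num_nonces = estimate_num_nonces(500, 30); // 500 nonces/s for 30 s
    println!("commit to {num_nonces} nonces");     // 15000
}
```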
Compute
After a precommit is confirmed, the protocol assigns it a random hash. A Benchmarker can then start their benchmark by iterating over their nonces, generating a unique seed & challenge instance, and executing their selected algorithm.
Generating Unpredictable Challenge Instances
Benchmarkers generate challenge instances by:
- Generating a seed by hashing the random hash, their settings, and the nonce. This ensures unpredictability.
- Seeding Pseudo-Random Number Generators (PRNGs) with that seed before generating the challenge instance.
This process is deterministic, allowing anyone to regenerate the same unpredictable instance.
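A minimal sketch of this derivation is below, assuming SHA-256 for hashing and a placeholder instance payload; the real protocol defines its own hashing, serialization, and per-challenge instance formats:

```rust
use rand::{rngs::StdRng, RngCore, SeedableRng};
use sha2::{Digest, Sha256};

// Hash the protocol's random hash, the committed settings, and the nonce
// so that each nonce yields a unique, unpredictable seed.
fn derive_seed(random_hash: &[u8], settings: &[u8], nonce: u64) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(random_hash);
    hasher.update(settings);
    hasher.update(nonce.to_le_bytes());
    hasher.finalize().into()
}

// Seed the PRNG before generating the instance; the same seed always
// reproduces the same instance, so anyone can regenerate it.
fn generate_instance(seed: [u8; 32]) -> Vec<u64> {
    let mut rng = StdRng::from_seed(seed);
    (0..16).map(|_| rng.next_u64()).collect() // placeholder instance payload
}

fn main() {
    let random_hash = b"random hash assigned by the protocol";
    let settings = b"serialized benchmark settings";
    for nonce in 0u64..1000 {
        let seed = derive_seed(random_hash, settings, nonce);
        let _instance = generate_instance(seed);
        // ...execute the selected algorithm on the instance here...
    }
}
```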
Challenges are designed such that the number of possible instances is sufficiently large that it is intractable for Benchmarkers to store solutions for potential re-use. This means they are forced to execute their selected algorithm.
Solution Discarding
After computing all nonces, Benchmarkers apply TIG’s solution rate and reliability discarding mechanisms. This process categorizes nonces into three distinct sets:
- Solutions: Nonces that produced valid solutions and were not discarded.
- Discarded Solutions: Nonces that found valid solutions but were discarded.
- Non-Solutions: Nonces that did not produce valid solutions.
This filtering mechanism serves two key purposes:
- Rate Control: Maintains a reasonable flow of solutions into the protocol.
- Quality Assurance: Accounts for the reliability of different algorithms.
For detailed information about this discarding mechanism, see Solution Discarding.
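As a hypothetical illustration of the three-way split (the actual discard rules are defined by TIG’s solution rate and reliability mechanisms):

```rust
// A hypothetical illustration of the three-way nonce split; the inputs would
// come from running the algorithm and applying TIG's discarding mechanisms.
enum NonceOutcome {
    Solution,          // valid solution, kept
    DiscardedSolution, // valid solution, discarded by the mechanism
    NonSolution,       // no valid solution found
}

fn categorize(found_solution: bool, discarded: bool) -> NonceOutcome {
    match (found_solution, discarded) {
        (true, false) => NonceOutcome::Solution,
        (true, true) => NonceOutcome::DiscardedSolution,
        (false, _) => NonceOutcome::NonSolution,
    }
}
```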
Proof Submission
Once the computational work is completed, Benchmarkers begin the proof submission process:
- Submit Benchmark Results: Benchmarkers report to the protocol which nonces fall into each category:
  - Solutions
  - Discarded Solutions
  - Non-Solutions
- Protocol Sampling: After the benchmark submission is confirmed, the TIG protocol randomly samples specific nonces from the benchmark for verification.
- Generate Merkle Proofs: Benchmarkers must then generate merkle proofs for the randomly selected nonces, enabling efficient probabilistic verification (see the sketch after this list).
- Submit Proofs: The merkle proofs are submitted to complete the benchmark process. Once Benchmarkers submit their proofs, their direct involvement in the benchmark ends.
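The shape of such a proof can be sketched with a generic merkle construction; this assumes SHA-256 and duplicate-last padding for odd levels, not TIG’s exact tree over per-nonce outputs:

```rust
use sha2::{Digest, Sha256};

// Generic merkle sketch: hash two child nodes into their parent.
fn hash_pair(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(left);
    hasher.update(right);
    hasher.finalize().into()
}

// Root committed alongside the benchmark results.
fn merkle_root(leaves: &[[u8; 32]]) -> [u8; 32] {
    let mut level = leaves.to_vec();
    while level.len() > 1 {
        if level.len() % 2 == 1 {
            level.push(*level.last().unwrap()); // duplicate the last hash
        }
        level = level.chunks(2).map(|p| hash_pair(&p[0], &p[1])).collect();
    }
    level[0]
}

// Sibling hashes along the path from the sampled leaf to the root.
fn merkle_proof(leaves: &[[u8; 32]], mut index: usize) -> Vec<[u8; 32]> {
    let mut level = leaves.to_vec();
    let mut proof = Vec::new();
    while level.len() > 1 {
        if level.len() % 2 == 1 {
            level.push(*level.last().unwrap());
        }
        let sibling = if index % 2 == 0 { index + 1 } else { index - 1 };
        proof.push(level[sibling]);
        level = level.chunks(2).map(|p| hash_pair(&p[0], &p[1])).collect();
        index /= 2;
    }
    proof
}

// A verifier recomputes the root from the sampled leaf and its proof,
// then compares it against the committed root.
fn verify(leaf: [u8; 32], mut index: usize, proof: &[[u8; 32]], root: [u8; 32]) -> bool {
    let mut hash = leaf;
    for sibling in proof {
        hash = if index % 2 == 0 {
            hash_pair(&hash, sibling)
        } else {
            hash_pair(sibling, &hash)
        };
        index /= 2;
    }
    hash == root
}
```

Because only the sampled leaves need accompanying sibling hashes, the proof size grows logarithmically with the number of nonces, which is what makes the verification probabilistic yet efficient.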