Benchmarkers

Benchmarkers are players in TIG who continuously select algorithms to compute solutions for challenges, submitting their work to TIG as precommits, benchmarks and proofs to earn block rewards.

Computing Solutions

The process of benchmarking comprises 3 steps:

Precommit

TIG requires Benchmarkers to commit to their settings and a number of nonces before starting every benchmark.

By imposing a limit of 30 unresolved benchmarks (benchmarks with no proof submitted, or flagged as fraudulent), TIG ensures that Benchmarkers actually execute their selected algorithms.

Selecting benchmark settings

A Benchmarker must select their settings, comprising 5 fields, before benchmarking can begin:

  • Player ID: The address of the Benchmarker. This prevents fraudulent re-use of solutions computed by another Benchmarker.

  • Challenge ID: Identifies the proof-of-work challenge for which the Benchmarker is attempting to compute solutions. The challenge must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection so as to minimise their imbalance. Note: imbalance minimisation is the default strategy for the browser benchmarker.

  • Algorithm ID: The proof-of-work algorithm that the Benchmarker wants to use to compute solutions. The algorithm must be flagged as active in the referenced block. Benchmarkers are incentivised to make their selection based on the algorithm’s performance in computing solutions.

  • Block ID: A reference block from which the lifespan of the solutions begins counting down. Benchmarkers are incentivised to reference the latest block so as to maximise the remaining lifespan of any computed solutions.

  • Difficulty: The difficulty of the challenge instances for which the Benchmarker is attempting to compute solutions. The difficulty must lie within the valid range of the challenge for the referenced block. Benchmarkers are incentivised to select a difficulty that strikes a balance between the number of blocks for which their solutions will remain qualifiers and the number of solutions they can compute (e.g. a lower difficulty may yield more solutions, but reduce the number of blocks for which those solutions remain qualifiers).
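
The five fields above might be modelled as follows. This is an illustrative sketch only; the field names, types and example values are hypothetical, not TIG's actual schema:

```python
from dataclasses import dataclass

# Hypothetical model of the five precommit settings fields described above.
@dataclass(frozen=True)
class BenchmarkSettings:
    player_id: str     # address of the Benchmarker
    challenge_id: str  # must be active in the referenced block
    algorithm_id: str  # must be active in the referenced block
    block_id: str      # reference block; lifespan counts down from here
    difficulty: tuple  # must lie within the challenge's valid range

# Example values are made up for illustration.
settings = BenchmarkSettings("player1", "c001", "a042", "b123", (50, 300))
```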

Committing to a number of nonces

A Benchmarker must commit to their selected settings and a number of nonces to prevent changes during benchmarking.

Critically, if a Benchmarker uses a different algorithm from the one they committed to, this can be detected, as the committed settings and nonce count allow anyone to recompute the benchmark.

When picking a number of nonces, a Benchmarker should consider their amount of compute, their selected algorithm, and difficulty.
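
As a rough illustration only (not a protocol formula), a nonce count might be estimated from the measured throughput of the selected algorithm at the chosen difficulty and a target benchmark duration. The function and figures below are hypothetical:

```python
# Back-of-envelope estimate: how many nonces can this Benchmarker's
# compute work through within a target benchmark duration?
def estimate_num_nonces(nonces_per_second: float, target_seconds: float) -> int:
    return max(1, int(nonces_per_second * target_seconds))

# e.g. an algorithm measured at 40 nonces/s, benchmarking for 60 seconds
num_nonces = estimate_num_nonces(40.0, 60.0)  # 2400
```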

Benchmark

After a precommit is confirmed, the protocol assigns it a random hash. A Benchmarker can then start their benchmark by iterating over their nonces, generating a unique seed & challenge instance, and executing their selected algorithm.

Generating Unpredictable Challenge Instances

Benchmarkers generate challenge instances by:

  1. Generating a seed by hashing the random hash, their settings and the nonce. This ensures unpredictability.

  2. Seeding a pseudo-random number generator (PRNG) with that seed before generating the challenge instance.

This process is deterministic, allowing anyone to regenerate the same unpredictable instance.
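
The two steps above can be sketched as follows, assuming SHA-256 and a simple string encoding of the settings; TIG's actual serialisation and hash function may differ:

```python
import hashlib
import random

# Deterministic seed derivation: hash(random_hash, settings, nonce).
# The encoding here is an illustrative assumption.
def generate_seed(random_hash: str, settings: str, nonce: int) -> bytes:
    data = f"{random_hash}:{settings}:{nonce}".encode()
    return hashlib.sha256(data).digest()

seed = generate_seed("abc123", "player1:c001:a042:b123:50,300", 7)
rng = random.Random(seed)  # same inputs -> same seed -> same instance
```

Because the derivation is a pure function of public inputs, any verifier can regenerate the identical seed and therefore the identical challenge instance.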

Challenges are designed such that the number of possible instances is sufficiently large that it is intractable for Benchmarkers to store solutions for potential re-use. This means they are forced to execute their selected algorithm.

Verifiable Execution

See this section

Generating Merkle Root

TIG requires Benchmarkers to hash the output data from the verifiable execution, before building a Merkle tree with the hashes corresponding to nonces 0 to N-1 as leaves.

Benchmarkers only submit to the protocol the Merkle root along with a list of nonces for which they claim to have solutions.

This minimises the amount of data that gets submitted.
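
A minimal sketch of building the Merkle root from the hashed output data, assuming SHA-256 and a power-of-two number of nonces; TIG's actual hashing and padding rules may differ:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Build a Merkle root over leaves for nonces 0..N-1 by repeatedly
# hashing adjacent pairs until one node remains.
def merkle_root(leaves: list[bytes]) -> bytes:
    level = leaves
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Leaves are hashes of the (hypothetical) per-nonce output data.
leaves = [h(f"output-{nonce}".encode()) for nonce in range(8)]
root = merkle_root(leaves)  # the 32-byte root submitted to the protocol
```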

Proofs

To verify the number of solutions a Benchmarker has submitted, TIG requires Benchmarkers to submit Merkle proofs.

Probabilistic Verification

When a benchmark is confirmed, TIG randomly samples up to 3 solution nonces and up to 3 other nonces (max of 6) for which the Benchmarker must submit Merkle proofs.

This random sampling makes it irrational for Benchmarkers to fraudulently “pad” a benchmark with fake solutions:

If a Benchmarker computes N solutions and pads the benchmark with M fraudulent solutions for a total of N + M claimed solutions, then the chance of their fraud going undetected is N/(N + M) for each sample. With multiple samples, the fraud is likely to be detected, wasting the real work done to compute the N solutions.

For example, if 50% of the claimed solutions are fraudulent, then with 3 samples the chance of the fraud being detected is 1 - (1/2)^3 = 87.5%.
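
The arithmetic above can be computed directly. This simplified model treats each sample as an independent draw, which slightly understates detection when sampling is without replacement:

```python
# Probability that padding fraud is caught: each sample independently
# lands on a real solution with probability N / (N + M), so fraud
# escapes detection only if every sample does.
def detection_probability(n_real: int, m_fake: int, samples: int) -> float:
    undetected = (n_real / (n_real + m_fake)) ** samples
    return 1 - undetected

# 50% of claimed solutions fraudulent, 3 samples -> 0.875
p = detection_probability(100, 100, 3)
```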

Generating Merkle Proof

A Benchmarker must collate and submit a Merkle proof for each sampled nonce, consisting of the output data for that nonce along with the Merkle branch.
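
Verifying a sampled nonce can be sketched as recomputing the leaf hash from the submitted output data, then folding in each sibling hash along the branch up to the root. The (sibling, is_left) branch encoding is an assumption for illustration, not TIG's actual format:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Recompute the path from a leaf to the root using the Merkle branch;
# the proof is valid iff the recomputed root matches the committed one.
def verify_proof(output_data: bytes, branch: list, root: bytes) -> bool:
    node = h(output_data)
    for sibling, sibling_is_left in branch:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Two-leaf example: prove the nonce-0 output against the root.
root = h(h(b"output-0") + h(b"output-1"))
ok = verify_proof(b"output-0", [(h(b"output-1"), False)], root)
```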

Submission Delay & Lifespan mechanism

Upon confirmation of a proof, a submission delay D is determined based on the block gap between when the benchmark started and when its proof was confirmed.

Any solutions for a benchmark only become “active” (eligible to earn rewards) from block X + D × 1.2, where X is the block in which the proof is confirmed.

TIG incentivises Benchmarkers to make submissions as soon as possible by imposing a lifespan of 120 blocks on benchmarks. This lifespan starts counting down from when the benchmark is started, after which solutions can no longer be active.
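
Putting the two mechanisms together, the activation block and expiry block can be sketched as below. The 120-block lifespan, the 1.2 factor, X and D come from the text above; the use of ceiling rounding is an assumption:

```python
import math

LIFESPAN = 120  # blocks, counting down from when the benchmark started

# Returns (first_active_block, expiry_block) for a benchmark, where the
# submission delay D is the block gap between the benchmark's start and
# the confirmation of its proof.
def active_window(start_block: int, confirmed_block: int):
    delay = confirmed_block - start_block            # D
    first_active = confirmed_block + math.ceil(delay * 1.2)  # X + D * 1.2
    expiry = start_block + LIFESPAN
    return first_active, expiry

# Benchmark started at block 1000, proof confirmed at block 1010:
# D = 10, so solutions are active from block 1022 until block 1120.
first, expiry = active_window(start_block=1000, confirmed_block=1010)
```

A slow submission shrinks the usable window from both sides: the delay pushes activation later while the lifespan deadline stays fixed, which is what makes prompt submission rational.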