
Benchmark Solution Discard Mechanism

After completing Proof-of-Work, Benchmarkers must discard some of their solutions before submission. This two-component process ensures network stability and incentivizes reliable algorithms:

  1. Solution Rate Discarding - Maintains target solution rates across the network
  2. Reliability Discarding - Rewards benchmarkers who use consistent, high-quality algorithms

Solution Rate Control

The protocol maintains network stability by targeting a specific solution rate for each challenge. This is achieved through a dynamic hash threshold that filters solutions - if too many solutions are being produced, the threshold is lowered to increase discarding; if too few, it’s raised to decrease discarding.

For each challenge at block $t$, the protocol:

  1. Calculates $\text{current\_rate}(t)$ by averaging the solution rates of the previous 10 blocks, where a block's solution rate is the number of solutions that become active in that block.
  2. Adjusts the hash threshold to steer this rate toward the target:

$$\text{target\_threshold} = \text{hash\_threshold}(t-1) \times \frac{\text{target}}{\text{current\_rate}(t)}$$

For example, if the current rate is double the target, this formula will halve the threshold to increase discarding.

To ensure network stability, changes to the hash threshold are limited to ±0.0025 per block:


$$\text{error} = \max\left(\min\left(\text{target\_threshold} - \text{hash\_threshold}(t-1),\ 0.0025\right),\ -0.0025\right)$$

The hash threshold is then set to:


$$\text{hash\_threshold}(t) = \min\left(1,\ \text{hash\_threshold}(t-1) + \text{error}\right)$$

This gradual adjustment prevents sudden changes in discarding behavior while allowing the system to steadily converge toward the target solution rate. See Benchmark Discarding for how Benchmarkers apply this threshold.
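The per-block adjustment above can be sketched as a single function. This is a minimal illustration under the document's definitions; the function name and the use of plain floats are assumptions, not the protocol's actual implementation:

```python
def next_hash_threshold(prev_threshold: float, solution_rates: list[int],
                        target_rate: float, max_step: float = 0.0025) -> float:
    """Compute the next block's hash threshold from the previous 10 blocks'
    solution rates, clamping the per-block change to +/- max_step."""
    # Average solution rate over the previous blocks (10 in the protocol).
    current_rate = sum(solution_rates) / len(solution_rates)
    # Steer toward the target: e.g. if the rate is double the target,
    # the raw target threshold is half the previous threshold.
    target_threshold = prev_threshold * target_rate / current_rate
    # Clamp the change to +/- max_step, and cap the threshold at 1.
    error = max(min(target_threshold - prev_threshold, max_step), -max_step)
    return min(1.0, prev_threshold + error)
```

For instance, with a target rate of 10 and ten previous blocks each averaging 20 solutions, the raw target threshold would halve, but the clamp limits the actual drop to 0.0025 in that block.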

Reliability

Reliability incentivizes Benchmarkers to use sophisticated algorithms rather than simple greedy approaches. The protocol calculates reliability scores for each Benchmarker based on their solution ratios compared to the network average. For each challenge at each block, we first calculate individual solution ratios for each Benchmarker $i$:


$$\text{solution\_ratio}_i = \frac{\text{solutions}_i}{\text{nonces}_i}$$

where $\text{solutions}_i$ is the total number of solutions and $\text{nonces}_i$ is the total number of nonces across all of Benchmarker $i$'s active benchmarks. The weighted average solution ratio for this block is then


$$\text{weighted\_average\_sol\_ratio} = \sum_i \frac{\text{qualifiers}_i}{\sum_j \text{qualifiers}_j} \cdot \text{solution\_ratio}_i$$

where $\text{qualifiers}_i$ is the number of Benchmarker $i$'s solutions that qualified for rewards this block.
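The two formulas above can be combined into one qualifier-weighted sum. A minimal sketch, assuming per-Benchmarker totals are available as dicts (the field names are illustrative, not the protocol's actual schema):

```python
def weighted_average_sol_ratio(benchmarkers):
    """Qualifier-weighted average of per-Benchmarker solution ratios.

    benchmarkers: list of dicts with keys 'solutions', 'nonces' and
    'qualifiers' (illustrative names, not the protocol's actual schema).
    """
    total_qualifiers = sum(b["qualifiers"] for b in benchmarkers)
    return sum(
        # Each Benchmarker's weight is their share of qualifying solutions.
        (b["qualifiers"] / total_qualifiers) * (b["solutions"] / b["nonces"])
        for b in benchmarkers
    )
```

Note that a Benchmarker with many qualifiers pulls the network average toward their own solution ratio, which is what makes the average resistant to low-effort benchmarks with few qualifying solutions.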

During a benchmark, after completing the proof-of-work, a Benchmarker calculates the solution ratio of that benchmark, $\text{benchmark\_sol\_ratio}$, and then calculates the reliability of the benchmark as


$$\text{reliability} = \frac{\text{benchmark\_sol\_ratio}}{\text{weighted\_average\_sol\_ratio}}$$

where $\text{weighted\_average\_sol\_ratio}$ is the weighted average solution ratio of the benchmark's reference block. A reliability score above 1.0 indicates above-average performance, while below 1.0 indicates below-average performance. This score directly affects how many solutions qualify for rewards - see Benchmark Discarding.
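As a sketch, the reliability of a single benchmark is just the ratio of its own solution ratio to the reference block's network-wide average (the function name and signature are illustrative):

```python
def reliability(benchmark_solutions: int, benchmark_nonces: int,
                reference_weighted_avg_sol_ratio: float) -> float:
    # A benchmark's solution ratio divided by the network-wide weighted
    # average solution ratio at the benchmark's reference block.
    benchmark_sol_ratio = benchmark_solutions / benchmark_nonces
    return benchmark_sol_ratio / reference_weighted_avg_sol_ratio
```

For example, a benchmark that finds 6 solutions from 10 nonces against a reference-block average of 0.3 would have a reliability of 2.0, i.e. twice the network average.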

Benchmark Discarding

After completing proof-of-work for a benchmark, a Benchmarker splits their nonces into three sets: $\text{non\_solutions}$, $\text{discarded\_solutions}$ and $\text{solutions}$. This is done by:

  1. Initially placing all nonces that produced solutions into the $\text{solutions}$ set
  2. Placing all nonces that didn't produce solutions into the $\text{non\_solutions}$ set
  3. Applying the following discarding mechanism to the $\text{solutions}$ set:

First, they take the $\text{hash\_threshold}$ from their reference block (determined by Solution Rate Control) and multiply it by their benchmark's $\text{reliability}$ score (calculated as described in Reliability):


$$\text{effective\_threshold} = \text{reliability} \cdot \text{hash\_threshold}$$

This effective threshold determines which solutions can qualify for rewards. For each $s$ in $\text{solutions}$:

  • If $\text{merkle\_hash}_s > \text{effective\_threshold}$: Move to $\text{discarded\_solutions}$
  • If $\text{merkle\_hash}_s \leq \text{effective\_threshold}$: Keep in $\text{solutions}$
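The partition described above can be sketched as follows. This is a simplified illustration: it treats each nonce's Merkle hash as a float in [0, 1] so it can be compared directly with the threshold, whereas the actual protocol compares hash values in their native representation.

```python
def partition_nonces(nonce_results, hash_threshold, reliability):
    """Split nonces into non_solutions, discarded_solutions and solutions.

    nonce_results: mapping nonce -> (is_solution, merkle_hash), with
    merkle_hash normalised to [0, 1] (an illustrative simplification).
    """
    effective_threshold = reliability * hash_threshold
    non_solutions, discarded_solutions, solutions = [], [], []
    for nonce, (is_solution, merkle_hash) in nonce_results.items():
        if not is_solution:
            non_solutions.append(nonce)        # never produced a solution
        elif merkle_hash > effective_threshold:
            discarded_solutions.append(nonce)  # solution, but hash over threshold
        else:
            solutions.append(nonce)            # may qualify for rewards
    return non_solutions, discarded_solutions, solutions
```

Because the effective threshold scales with reliability, a benchmark with a reliability above 1.0 keeps solutions whose hashes would otherwise exceed the raw hash threshold.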

Important Notes:

  • Discarded solutions still count when calculating solution ratios and reliability scores
  • Only solutions in $\text{solutions}$ (not discarded) can qualify for rewards
  • Higher reliability scores allow benchmarkers to keep more solutions