Benchmark Solution Discard Mechanism
After completing Proof-of-Work, Benchmarkers must discard some of their solutions before submission. This two-component process ensures network stability and incentivizes reliable algorithms:
- Solution Rate Discarding - Maintains target solution rates across the network
- Reliability Discarding - Rewards benchmarkers who use consistent, high-quality algorithms
Solution Rate Control
The protocol maintains network stability by targeting a specific solution rate for each challenge. This is achieved through a dynamic hash threshold that filters solutions - if too many solutions are being produced, the threshold is lowered to increase discarding; if too few, it’s raised to decrease discarding.
For each challenge at block $b$, the protocol:
- Calculates the current solution rate $r_b$ by averaging the solution rates of the previous 10 blocks, where the solution rate of a block is the number of solutions that become active in that block.
- Adjusts the hash threshold to steer this rate toward the target rate $r_{\text{target}}$:

$$t_{\text{proposed}} = t_{b-1} \cdot \frac{r_{\text{target}}}{r_b}$$
For example, if the current rate is double the target, this formula will halve the threshold to increase discarding.
To ensure network stability, changes to the hash threshold are limited to ±0.0025 per block:

$$\Delta_b = \mathrm{clamp}\!\left(t_{\text{proposed}} - t_{b-1},\; -0.0025,\; +0.0025\right)$$

The hash threshold is then set to:

$$t_b = t_{b-1} + \Delta_b$$
This gradual adjustment prevents sudden changes in discarding behavior while allowing the system to steadily converge toward the target solution rate. See Benchmark Discarding for how Benchmarkers apply this threshold.
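The adjustment described above can be sketched in Python. Function and variable names here (`next_hash_threshold`, `target_rate`, etc.) are illustrative assumptions, not protocol source code; only the target/current scaling and the ±0.0025 clamp come from the description above.

```python
# Illustrative sketch of the per-block hash-threshold adjustment.
# Names are assumptions; the scaling rule and clamp bound follow the text.

MAX_STEP = 0.0025  # maximum per-block change to the hash threshold

def next_hash_threshold(prev_threshold: float,
                        recent_rates: list[float],
                        target_rate: float) -> float:
    """Steer the hash threshold toward the target solution rate."""
    # Average the solution rate over the previous 10 blocks.
    window = recent_rates[-10:]
    current_rate = sum(window) / len(window)
    # Proposed threshold scales by target/current, so a rate at double
    # the target halves the threshold (increasing discarding).
    proposed = prev_threshold * target_rate / current_rate
    # Clamp the change to +/-0.0025 per block for stability.
    delta = max(-MAX_STEP, min(MAX_STEP, proposed - prev_threshold))
    return prev_threshold + delta
```

For example, with a previous threshold of 0.5 and a rate at double the target, the proposed threshold would be 0.25, but the clamp limits the actual step to 0.4975; repeated blocks then converge gradually.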
Reliability
Reliability incentivizes Benchmarkers to use sophisticated algorithms rather than simple greedy approaches. The protocol calculates reliability scores for each Benchmarker based on their solution ratios compared to the network average. For each challenge at each block, we first calculate individual solution ratios for each Benchmarker $i$:

$$\rho_i = \frac{s_i}{n_i}$$

where $s_i$ is the total number of solutions and $n_i$ is the total number of nonces across all of Benchmarker $i$'s active benchmarks. A weighted average solution ratio for this block is then set as

$$\bar{\rho} = \frac{\sum_i w_i \, \rho_i}{\sum_i w_i}$$

where $w_i$ is the number of Benchmarker $i$'s solutions that qualified for rewards this block.
During a benchmark, after a Benchmarker completes the proof-of-work, they calculate the solution ratio $\rho$ of that benchmark and then calculate the reliability of the benchmark as

$$\text{reliability} = \frac{\rho}{\bar{\rho}}$$

where $\bar{\rho}$ is the weighted average solution ratio of the reference block for the benchmark. A reliability score above 1.0 indicates above-average performance, while below 1.0 indicates below-average performance. This score directly affects how many solutions qualify for rewards - see Benchmark Discarding.
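The reliability calculation above can be sketched as follows. The function names and the dictionary layout for per-benchmarker data are assumptions for illustration; the weighting by reward-qualifying solutions and the ratio-over-average definition follow the description.

```python
# Illustrative sketch of the reliability computation; names and data
# layout are assumptions, not protocol source code.

def weighted_avg_solution_ratio(benchmarkers: list[dict]) -> float:
    """Weighted average of per-benchmarker solution ratios, weighted by
    each benchmarker's number of reward-qualifying solutions this block."""
    num = sum(b["qualifying"] * b["solutions"] / b["nonces"]
              for b in benchmarkers)
    den = sum(b["qualifying"] for b in benchmarkers)
    return num / den

def reliability(benchmark_solutions: int, benchmark_nonces: int,
                reference_avg_ratio: float) -> float:
    """Reliability = this benchmark's own solution ratio, relative to the
    reference block's weighted average solution ratio."""
    return (benchmark_solutions / benchmark_nonces) / reference_avg_ratio
```

As a worked example under these assumptions: two benchmarkers with solution ratios 0.1 and 0.3 and qualifying-solution weights 10 and 30 give a weighted average of 0.25, so a benchmark with ratio 0.3 scores a reliability of 1.2 (above average).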
Benchmark Discarding
After completing proof-of-work for a benchmark, a Benchmarker splits their nonces into three sets $S$, $D$, and $N$. This is done by:
- Initially placing all nonces that produced solutions into the set $S$
- Placing all nonces that didn't produce solutions into the set $N$
- Applying the following discarding mechanism to the set $S$:
First, they take the hash threshold $t$ from their reference block (determined by Solution Rate Control) and multiply it by their benchmark's reliability score (calculated as described in Reliability):

$$t_{\text{eff}} = t \cdot \text{reliability}$$
This effective threshold determines which solutions can qualify for rewards. For each solution nonce in $S$, with solution hash $h$:
- If $h > t_{\text{eff}}$: Move the nonce to $D$
- If $h \leq t_{\text{eff}}$: Keep the nonce in $S$
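The partitioning steps above can be sketched as a single pass over the benchmark's nonces. The function name, the mapping of nonces to normalized solution hashes, and the use of `None` for non-solutions are illustrative assumptions; the three-way split and the effective-threshold comparison follow the description.

```python
# Illustrative sketch of splitting a benchmark's nonces into three sets
# after proof-of-work. Names and data layout are assumptions.

def partition_nonces(nonce_hashes: dict,
                     hash_threshold: float,
                     reliability: float):
    """Return (qualifying, discarded, non_solutions).

    nonce_hashes maps each nonce to its solution hash (normalized to
    [0, 1]), or to None if the nonce produced no solution.
    """
    effective = hash_threshold * reliability  # reliability scales the cutoff
    qualifying, discarded, non_solutions = set(), set(), set()
    for nonce, h in nonce_hashes.items():
        if h is None:
            non_solutions.add(nonce)   # nonce never produced a solution
        elif h > effective:
            discarded.add(nonce)       # solution discarded, cannot earn rewards
        else:
            qualifying.add(nonce)      # solution can qualify for rewards
    return qualifying, discarded, non_solutions
```

Note how a reliability above 1.0 raises the effective threshold, so borderline solutions that would otherwise be discarded are kept, which is the incentive described in the notes below.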
Important Notes:
- Discarded solutions still count when calculating solution ratios and reliability scores
- Only solutions remaining in $S$ (not discarded) can qualify for rewards
- Higher reliability scores allow benchmarkers to keep more solutions