OPoW - Optimisable Proof of Work
TIG has developed a novel variant of proof-of-work called optimisable proof-of-work (OPoW).
Optimisable proof-of-work (OPoW) can uniquely integrate multiple proof-of-works, “binding” them in such a way that optimisations to the proof-of-work algorithms do not cause instability or centralisation. This binding is embodied in the calculation of influence for Benchmarkers. The adoption of an algorithm is then calculated using each Benchmarker’s influence and the fraction of qualifiers they computed using that algorithm.
TIG combines a crypto-economic framework with OPoW to:
- Incentivise miners, referred to as Benchmarkers, to adopt the most efficient algorithms (for performing proof-of-work) that are contributed openly to TIG. This incentive is derived from sharing block rewards proportional to the number of solutions found.
- Incentivise contributors, known as Innovators, to optimise existing proof-of-work algorithms and invent new ones. The incentive is provided by the prospect of earning a share of the block rewards based on adoption of their algorithms by Benchmarkers.
TIG will progressively phase in proof-of-works over time, directing innovative efforts towards the most significant challenges in science.
PoW vs OPoW
Traditionally, Proof of Work (PoW) systems such as Bitcoin rely on miners to solve a cryptographic computational problem in order to create new blocks; the miner who creates a block earns all of its rewards. This means a miner's influence in a PoW system is proportional to their computational power, but this influence is never explicitly calculated as a metric.
In contrast, with Optimisable Proof of Work (OPoW), solving computational problems is decoupled from creating blocks, allowing many solutions to be found per block.
The influence of a miner/Benchmarker in OPoW is explicitly calculated from their number of solutions compared to other miners/Benchmarkers, allowing block rewards to be shared amongst all miners/Benchmarkers based on their influence.
To smooth out fluctuations in influence, TIG gives solutions a lifespan of 120 blocks, during which they continue to contribute towards influence.
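As a rough illustration of this reward-sharing model, here is a minimal sketch in which each solution counts towards a Benchmarker's share for 120 blocks. The `LIFESPAN_BLOCKS` constant matches the lifespan above, while the simple proportional split and all names are illustrative assumptions rather than the exact protocol logic (the full influence calculation is described below).

```python
# Minimal sketch: block rewards shared pro-rata by active (non-expired) solutions.
# The proportional split and all names are illustrative assumptions.
LIFESPAN_BLOCKS = 120

def active_solutions(solutions, current_block):
    """Count each Benchmarker's solutions still within their 120-block lifespan."""
    return {
        benchmarker: sum(1 for b in blocks if current_block - b < LIFESPAN_BLOCKS)
        for benchmarker, blocks in solutions.items()
    }

def share_rewards(solutions, current_block, block_reward):
    """Split one block's reward in proportion to each Benchmarker's active solutions."""
    counts = active_solutions(solutions, current_block)
    total = sum(counts.values())
    return {b: block_reward * n / total for b, n in counts.items() if total > 0}

# Example: two Benchmarkers, solutions recorded by the block they were submitted in.
solutions = {"alice": [100, 150, 200], "bob": [190, 195]}
print(share_rewards(solutions, current_block=210, block_reward=100.0))
```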
Benchmarker Influence
OPoW introduces a novel metric, imbalance, which quantifies the degree to which a Benchmarker spreads their resources unevenly across factors. Factors in TIG include the multiple challenges (proof-of-work) and weighted deposit (proof-of-deposit).
The metric is defined as:

$$\text{imbalance} = \frac{c_v(f)^2}{n - 1}$$

where $c_v$ is the coefficient of variation, $f$ is a set of numbers between 0.0 and 1.0 (each representing a Benchmarker's fraction of a particular factor), and $n$ is the number of active challenges. This metric ranges from 0 to 1, where lower values signify less centralisation.
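A minimal sketch of this metric, assuming the coefficient of variation is the (population) standard deviation of the factor fractions divided by their mean, and using the number of supplied fractions for the denominator; the function and variable names are illustrative:

```python
import statistics

def imbalance(fractions: list[float]) -> float:
    """Imbalance = c_v(f)^2 / (n - 1), where f are a Benchmarker's factor fractions."""
    n = len(fractions)
    mean = statistics.mean(fractions)
    if mean == 0 or n < 2:
        return 0.0
    c_v = statistics.pstdev(fractions) / mean  # coefficient of variation
    return c_v ** 2 / (n - 1)

# A Benchmarker focused on one factor vs. one spread evenly across four factors.
print(imbalance([1.0, 0.0, 0.0, 0.0]))      # -> 1.0 (maximum imbalance)
print(imbalance([0.25, 0.25, 0.25, 0.25]))  # -> 0.0 (minimum imbalance)
```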
Penalising imbalance is achieved through an imbalance penalty:

$$\text{imbalance\_penalty} = 1 - e^{-k \cdot \text{imbalance}}$$

where $k$ is a coefficient (currently set to 1.5). The imbalance penalty ranges from 0 to 1, where 0 signifies no penalty.
Block rewards are distributed pro-rata amongst Benchmarkers based on their influence. Influence is calculated in such a way that Benchmarkers are incentivised to minimise their imbalance:

$$\text{influence}_i = \frac{\overline{f_i} \cdot (1 - \text{imbalance\_penalty}_i)}{\sum_j \overline{f_j} \cdot (1 - \text{imbalance\_penalty}_j)}$$

where $\overline{f_i}$ is the mean of Benchmarker $i$'s factor fractions.
Notes:
- A Benchmarker focusing solely on a single factor will exhibit maximum imbalance and therefore the maximum penalty.
- Conversely, a Benchmarker with an equal fraction across all factors will have the minimum imbalance value of 0.
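Putting the pieces together, here is a sketch of how influence could be computed from each Benchmarker's factor fractions, assuming the exponential penalty form and the mean fraction as the pre-penalty weight shown above; all names are illustrative:

```python
import math
import statistics

K = 1.5  # imbalance penalty coefficient

def imbalance(fractions):
    mean = statistics.mean(fractions)
    if mean == 0 or len(fractions) < 2:
        return 0.0
    c_v = statistics.pstdev(fractions) / mean
    return c_v ** 2 / (len(fractions) - 1)

def influence(all_fractions: dict[str, list[float]]) -> dict[str, float]:
    """Pre-penalty weight = mean fraction; penalty = 1 - exp(-K * imbalance)."""
    weights = {}
    for benchmarker, fractions in all_fractions.items():
        penalty = 1.0 - math.exp(-K * imbalance(fractions))
        weights[benchmarker] = statistics.mean(fractions) * (1.0 - penalty)
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()} if total > 0 else weights

# "balanced" spreads effort evenly; "focused" puts the same total fraction on one factor.
print(influence({
    "balanced": [0.2, 0.2, 0.2, 0.2],
    "focused": [0.8, 0.0, 0.0, 0.0],
}))
```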
Challenge Factor
A Benchmarker’s fraction of a challenge factor is calculated using their qualifying solutions and reliability:

$$f_{i,c} = \frac{\text{num\_qualifiers}_{i,c} \cdot \text{reliability}_{i,c}}{\sum_j \text{num\_qualifiers}_{j,c} \cdot \text{reliability}_{j,c}}$$
Reliability is a metric based on a Benchmarker’s ratio of solutions to nonces (also called their solution ratio), and is designed to incentivise the adoption of algorithms that effectively balance the exploration-exploitation trade-off within limited computational budgets:

$$\text{reliability}_{i,c} = \frac{\text{solution\_ratio}_{i,c}}{\text{avg\_solution\_ratio}_{c}}$$
Notes:
- Reliability is currently capped at 25.0
- A Benchmarker’s solution ratio is calculated from their qualifying benchmarks
- The average solution ratio is calculated from qualifying benchmarks across all Benchmarkers, weighted by the number of qualifiers
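A sketch of how a challenge-factor fraction could be derived from qualifiers and reliability under the formulas above, with the 25.0 cap and the qualifier-weighted average taken from the notes; the data layout and names are illustrative:

```python
RELIABILITY_CAP = 25.0

def challenge_fractions(stats: dict[str, dict]) -> dict[str, float]:
    """stats[benchmarker] = {"qualifiers": int, "solution_ratio": float} for one challenge."""
    total_qualifiers = sum(s["qualifiers"] for s in stats.values())
    # Average solution ratio, weighted by each Benchmarker's number of qualifiers.
    avg_ratio = sum(
        s["solution_ratio"] * s["qualifiers"] for s in stats.values()
    ) / total_qualifiers
    weights = {}
    for b, s in stats.items():
        reliability = min(s["solution_ratio"] / avg_ratio, RELIABILITY_CAP)
        weights[b] = s["qualifiers"] * reliability
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

print(challenge_fractions({
    "alice": {"qualifiers": 3000, "solution_ratio": 0.02},
    "bob": {"qualifiers": 2000, "solution_ratio": 0.01},
}))
```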
Weighted Deposit Factor
A Benchmarker’s fraction of weighted deposit is calculated using the weighted deposits delegated to them:

$$f_{i,\text{deposit}} = \frac{\text{weighted\_deposit}_i}{\sum_j \text{weighted\_deposit}_j}$$
A Benchmarker’s weighted deposit factor is currently limited to 1.25× their average challenge factor.
Example:
If a Benchmarker has an average challenge factor of 10%, their weighted deposit factor is limited to 12.5% (10% × 1.25). Even if their raw weighted deposit factor is 50%, it will be capped at 12.5%.
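A minimal sketch of this cap; the constant and function names are illustrative assumptions:

```python
DEPOSIT_FACTOR_CAP = 1.25

def capped_deposit_factor(raw_deposit_fraction: float, challenge_factors: list[float]) -> float:
    """Limit the weighted deposit factor to 1.25x the average challenge factor."""
    avg_challenge_factor = sum(challenge_factors) / len(challenge_factors)
    return min(raw_deposit_fraction, DEPOSIT_FACTOR_CAP * avg_challenge_factor)

# Average challenge factor of 10%: even a 50% share of weighted deposits is capped at 12.5%.
print(capped_deposit_factor(0.50, [0.10, 0.10, 0.10]))  # -> 0.125
```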
Cutoff Mechanism
A Benchmarker’s cutoff is the maximum number of their solutions per challenge that can qualify and earn rewards. If a Benchmarker’s cutoff is 0, they will not earn any rewards, no matter how many solutions they submit. The cutoff is calculated from a Benchmarker’s solutions across every active challenge, which means that a Benchmarker must benchmark every challenge in order to earn tokens.
With the introduction of proof-of-deposit (PoD), the cutoff is now also limited by a Benchmarker’s deposit.
Any solution that is cut off will not be considered for rewards and is essentially ignored: it has no effect on the rest of the system and does not raise the difficulty of the challenge.
Determining Qualifiers
There is a threshold of 5000 qualifiers per challenge. If the number of qualifiers exceeds this threshold, higher-difficulty solutions take priority over lower-difficulty solutions when determining which solutions qualify.
This is implemented using a Pareto frontier mechanism:
1. Sort all benchmarks for a challenge by difficulty.
2. Filter for benchmarks with difficulties on the Pareto frontier.
3. Add the filtered benchmarks to a list, and update each Benchmarker’s stats:
   - Number of nonces
   - Solution ratio
4. Calculate the number of qualifiers by summing the number of solutions across filtered benchmarks. Only consider benchmarks whose Benchmarker’s stats meet the minimum thresholds:
   - at least 100 nonces
   - a solution ratio of at least 10% of the average (from the previous block)
5. Repeat steps 2-4 until the number of qualifiers is at least 5000.

The parameters for each challenge can be found in Challenges.
There can be more than 5000 qualifiers per challenge, and only the qualifying solutions will affect difficulty and earn rewards.
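A simplified sketch of this selection loop, assuming multi-dimensional difficulty tuples, pre-aggregated Benchmarker stats, and a helper that picks benchmarks whose difficulty is not dominated by any other remaining benchmark; the data layout and names are illustrative:

```python
QUALIFIER_THRESHOLD = 5000
MIN_NONCES = 100
MIN_RATIO_OF_AVG = 0.10

def dominates(a, b):
    """Difficulty a dominates b if it is >= in every dimension and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(benchmarks):
    """Benchmarks whose difficulty is not dominated by any other remaining benchmark."""
    return [
        b for b in benchmarks
        if not any(dominates(o["difficulty"], b["difficulty"]) for o in benchmarks if o is not b)
    ]

def select_qualifiers(benchmarks, stats, avg_solution_ratio):
    """Peel off Pareto frontiers (hardest first) until at least 5000 qualifying solutions.

    benchmarks: list of {"benchmarker", "difficulty", "num_solutions"} dicts.
    stats: {benchmarker: {"nonces": int, "solution_ratio": float}} (pre-aggregated).
    """
    remaining = list(benchmarks)
    qualifiers, num_qualifiers = [], 0
    while remaining and num_qualifiers < QUALIFIER_THRESHOLD:
        frontier = pareto_frontier(remaining)
        remaining = [b for b in remaining if b not in frontier]
        for b in frontier:
            s = stats[b["benchmarker"]]
            # Only count benchmarks whose Benchmarker meets the minimum thresholds.
            if s["nonces"] >= MIN_NONCES and s["solution_ratio"] >= MIN_RATIO_OF_AVG * avg_solution_ratio:
                qualifiers.append(b)
                num_qualifiers += b["num_solutions"]
    return qualifiers
```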
Difficulty Adjustment
The difficulty for a challenge is adjusted based on the total number of qualifiers found by Benchmarkers. Qualifiers are filtered and sorted by difficulty, and the easiest qualifying frontier (called the base frontier) is extracted.

The scaling factor is then calculated for the challenge:

$$\text{scaling\_factor} = \frac{\text{num\_qualifiers}}{\text{target\_num\_qualifiers}}$$

where target_num_qualifiers is currently set to 5000.

The scaling factor is limited to a maximum of 1.125.

The scaled frontier is then computed by scaling the base frontier by the scaling factor.

The base frontier and the scaled frontier form the valid difficulty range for a block; any difficulty within this range may be selected for benchmarking.
NOTES:
- If the scaling factor > 1, the difficulty range for that Challenge is increasing in difficulty.
- If the scaling factor < 1, the difficulty range for that Challenge is decreasing in difficulty.
- For a given amount of compute, it is expected that the difficulty range will keep increasing until there are approximately 5000 solutions submitted every 120 blocks, i.e. the scaling factor settles at 1.
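A sketch of the scaling computation, assuming each difficulty parameter on the base frontier is scaled multiplicatively and rounded; names and the rounding are illustrative assumptions:

```python
TARGET_NUM_QUALIFIERS = 5000
MAX_SCALING_FACTOR = 1.125

def scale_frontier(base_frontier, num_qualifiers):
    """Scale the easiest qualifying frontier by num_qualifiers / target (capped at 1.125)."""
    scaling_factor = min(num_qualifiers / TARGET_NUM_QUALIFIERS, MAX_SCALING_FACTOR)
    scaled_frontier = [
        tuple(round(param * scaling_factor) for param in point) for point in base_frontier
    ]
    return scaling_factor, scaled_frontier

# 6000 qualifiers -> raw factor 1.2, capped at 1.125: the difficulty range moves up.
print(scale_frontier([(50, 100), (60, 90)], num_qualifiers=6000))
```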
Algorithm Adoption
The adoption of an algorithm is calculated using each Benchmarker’s influence and the fraction of qualifiers (for a particular challenge) they computed using that algorithm.
We first calculate the weights across all algorithms:

$$\text{weight}_a = \sum_i \text{influence}_i \cdot \text{frac\_qualifiers}_{i,a}$$

where $\text{frac\_qualifiers}_{i,a}$ is the fraction of Benchmarker $i$'s qualifiers (for the algorithm's challenge) computed using algorithm $a$.
Influence is used to prevent manipulation of adoption by Benchmarkers. It disincentivises Benchmarkers from focusing on a single algorithm, as their influence would be low.
To calculate the adoption of an algorithm, we normalise the weights across all algorithms for each challenge:

$$\text{adoption}_a = \frac{\text{weight}_a}{\sum_{a' \in \text{challenge}} \text{weight}_{a'}}$$
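A sketch of the adoption calculation for a single challenge, following the weighting and normalisation above; the data layout is illustrative:

```python
def algorithm_adoption(influences: dict[str, float],
                       qualifier_fractions: dict[str, dict[str, float]]) -> dict[str, float]:
    """qualifier_fractions[benchmarker][algorithm] = fraction of that Benchmarker's
    qualifiers (for this challenge) computed with that algorithm."""
    weights: dict[str, float] = {}
    for benchmarker, fractions in qualifier_fractions.items():
        for algorithm, fraction in fractions.items():
            weights[algorithm] = weights.get(algorithm, 0.0) + influences[benchmarker] * fraction
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}

print(algorithm_adoption(
    influences={"alice": 0.6, "bob": 0.4},
    qualifier_fractions={
        "alice": {"algo_x": 1.0},
        "bob": {"algo_x": 0.5, "algo_y": 0.5},
    },
))  # -> {"algo_x": 0.8, "algo_y": 0.2}
```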
Breakthrough Adoption
Multiple algorithms can be attributed to the same breakthrough, so we calculate the adoption of a breakthrough as the sum of the adoptions of all algorithms attributed to it.
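For example, a breakthrough's adoption could be summed from its attributed algorithms as follows (a minimal sketch; the attribution mapping is illustrative):

```python
def breakthrough_adoption(algorithm_adoption: dict[str, float],
                          attribution: dict[str, str]) -> dict[str, float]:
    """attribution maps each algorithm to the breakthrough it is attributed to."""
    totals: dict[str, float] = {}
    for algorithm, adoption in algorithm_adoption.items():
        breakthrough = attribution.get(algorithm)
        if breakthrough is not None:
            totals[breakthrough] = totals.get(breakthrough, 0.0) + adoption
    return totals

print(breakthrough_adoption({"algo_x": 0.8, "algo_y": 0.2},
                            {"algo_x": "bt_1", "algo_y": "bt_1"}))  # -> {"bt_1": 1.0}
```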