
OPoW - Optimisable Proof of Work

TIG has developed a novel variant of proof-of-work called optimisable proof-of-work (OPoW).

Optimisable proof-of-work (OPoW) is unique in its ability to integrate multiple proof-of-works, “binding” them in such a way that optimisations to the proof-of-work algorithms do not cause instability or centralisation. This binding is embodied in the calculation of influence for Benchmarkers. The adoption of an algorithm is then calculated using each Benchmarker’s influence and the fraction of qualifiers they computed using that algorithm.

TIG combines a crypto-economic framework with OPoW to:

  • Incentivise miners, referred to as Benchmarkers, to adopt the most efficient algorithms (for performing proof-of-work) that are contributed openly to TIG. This incentive is derived from sharing block rewards proportional to the number of solutions found.

  • Incentivise contributors, known as Innovators, to optimise existing proof-of-work algorithms and invent new ones. The incentive is provided by the prospect of earning a share of the block rewards based on adoption of their algorithms by Benchmarkers.

TIG will progressively phase in proof-of-works over time, directing innovative efforts towards the most significant challenges in science.

PoW vs OPoW

Traditionally, Proof of Work (PoW) systems (like Bitcoin) rely on miners solving a cryptographic computational problem in order to create a new block and earn all of its rewards. This means the influence of a miner in a PoW system is proportional to their computational power; however, this is not a metric that is explicitly calculated.

In contrast, with Optimisable Proof of Work (OPoW), solving computational problems is decoupled from creating blocks, allowing many solutions to be found per block.

The influence of a miner/Benchmarker in OPoW is explicitly calculated from their number of solutions compared to other miners/Benchmarkers, allowing block rewards to be shared amongst all miners/Benchmarkers based on their influence.

To smooth out fluctuations in influence, TIG gives solutions a lifespan of 120 blocks, where solutions contribute towards influence over that period.
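
As a minimal illustration of the lifespan rule (the data representation is an assumption, not the protocol's actual schema), only solutions confirmed within the last 120 blocks still count towards influence:

```python
LIFESPAN = 120  # number of blocks a solution remains active

def active_solutions(confirmed_heights, current_height):
    # Keep only solutions confirmed within the last LIFESPAN blocks;
    # these are the solutions that still contribute towards influence.
    return [h for h in confirmed_heights if current_height - h < LIFESPAN]
```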

Benchmarker Influence

OPoW introduces a novel metric, imbalance, which quantifies how unevenly a Benchmarker spreads their computational work across challenges. This is only possible when there are multiple proof-of-works.

The metric is defined as:


imbalance = \frac{C_v(\%qualifiers)^2}{N-1}

where C_v is the coefficient of variation, %qualifiers is the fraction of qualifiers found by a Benchmarker across challenges, and N is the number of active challenges. This metric ranges from 0 to 1, where lower values signify less centralisation.

Penalising imbalance is achieved through:


imbalance\_penalty = 1 - \exp(-k \cdot imbalance)

where k is a coefficient (currently set to 1.5). The imbalance penalty ranges from 0 to 1, where 0 signifies no penalty.

When block rewards are distributed pro-rata amongst Benchmarkers after applying their imbalance penalty, the result is that Benchmarkers are incentivised to minimise their imbalance so as to maximise their reward.

A Benchmarker’s influence is the fraction of block rewards they earn. Influence is calculated as:

influence \propto average(fraction\_of\_factors) \cdot (1 - imbalance\_penalty(fraction\_of\_factors))

where the factors can be the Benchmarker’s number of qualifiers for a particular challenge or their weighted deposit (after the introduction of PoD).

Notes:

  • A Benchmarker focusing solely on a single challenge will exhibit a maximum imbalance and therefore maximum penalty.

  • Conversely, a Benchmarker with an equal fraction of qualifiers across all challenges will demonstrate a minimum imbalance value of 0.
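
To make these quantities concrete, the sketch below computes imbalance, the imbalance penalty, and the resulting un-normalised influence weight for a single Benchmarker in Python. The factors here are just qualifier fractions (ignoring the deposit factor added with PoD), and the function names and data shapes are illustrative assumptions; actual influence is this weight normalised across all Benchmarkers.

```python
import math

K = 1.5  # imbalance penalty coefficient (currently set to 1.5)

def imbalance(fractions):
    # fractions: this Benchmarker's fraction of qualifiers on each active challenge
    n = len(fractions)
    mean = sum(fractions) / n
    if n < 2 or mean == 0:
        return 0.0
    variance = sum((f - mean) ** 2 for f in fractions) / n
    cv = math.sqrt(variance) / mean   # coefficient of variation
    return cv ** 2 / (n - 1)          # ranges from 0 (even spread) to 1 (single challenge)

def influence_weight(fractions):
    penalty = 1 - math.exp(-K * imbalance(fractions))   # imbalance_penalty
    return (sum(fractions) / len(fractions)) * (1 - penalty)

# Even spread across 3 challenges: imbalance 0, no penalty
print(influence_weight([0.2, 0.2, 0.2]))   # 0.2
# All effort on one challenge: imbalance 1, maximum penalty
print(influence_weight([0.6, 0.0, 0.0]))   # ~0.045
```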

Cutoff Mechanism

A Benchmarker’s cutoff is the maximum number of their solutions per challenge that can qualify and earn rewards. If a Benchmarker’s cutoff is 0, they will not earn any rewards, no matter how many solutions they submit. The cutoff is calculated as follows:


cutoff = \text{floor}\left(\min(\#solutions\_across\_challenges) \times 1.1\right)


This means that a Benchmarker must benchmark every challenge in order to earn tokens.
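
As a rough illustration (ignoring the deposit limit introduced with PoD below, and using an assumed list of per-challenge solution counts):

```python
import math

def cutoff(solutions_per_challenge):
    # solutions_per_challenge: this Benchmarker's solution count on every active challenge
    return math.floor(min(solutions_per_challenge) * 1.1)

print(cutoff([40, 95, 120]))  # floor(40 * 1.1) = 44 qualifying solutions per challenge at most
print(cutoff([0, 95, 120]))   # skipping a challenge gives a cutoff of 0, so no rewards at all
```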

With the introduction of PoD, cutoff is now also limited by a Benchmarker’s deposit.

Any solution that is cut off is not considered for rewards and is essentially ignored (it has no effect on the rest of the system). It also does not raise the difficulty for the challenge.

Determining Qualifiers

There is a threshold of 5000 qualifiers per challenge. If the number of solutions exceeds this threshold, solutions with higher difficulty take priority over lower difficulty solutions when determining which solutions are qualifiers.

This is implemented using a Pareto frontier mechanism.

All solutions are sorted by their difficulty to calculate the Pareto frontier, and the solutions on that frontier become qualifiers. This process is repeated on the remaining solutions until the number of qualifiers exceeds 5000.
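
A simplified sketch of this repeated frontier extraction is shown below. The real protocol works on each challenge's multi-dimensional difficulty parameters; the data shapes, and the assumption that larger parameter values are always harder, are illustrative.

```python
def pareto_frontier(points):
    # Points not dominated by any other point (assuming larger values = harder in every dimension)
    return [
        p for p in points
        if not any(q != p and all(qi >= pi for qi, pi in zip(q, p)) for q in points)
    ]

def select_qualifiers(solutions, threshold=5000):
    # solutions: list of (difficulty_tuple, solution_id) pairs -- an assumed representation
    remaining = list(solutions)
    qualifiers = []
    while remaining and len(qualifiers) < threshold:
        frontier = set(pareto_frontier([d for d, _ in remaining]))
        qualifiers += [s for s in remaining if s[0] in frontier]
        remaining = [s for s in remaining if s[0] not in frontier]
    return qualifiers
```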



The parameters for each challenge can be found in Challenges.

ℹ️

There can be more than 5000 qualifiers per challenge, and only the qualifying solutions will affect difficulty and earn rewards.

Difficulty Adjustment

The difficulty for a challenge is adjusted based on the total number of qualifiers found by the Benchmarkers. Qualifiers are filtered and sorted by difficulty, and the easiest qualifying frontier (called the base_frontier) is identified.

The scaling_factor is then calculated for the challenge:


scaling\_factor = \frac{total\_num\_qualifiers}{threshold\_num\_qualifiers}


where threshold_num_qualifiers is currently set to 5000.

ℹ️

The scaling_factor is limited to 1.125.

The scaled_frontier is then computed by scaling the base_frontier by the scaling_factor.

The base_frontier and scaled_frontier form the valid difficulty range for a block; any difficulty within this range may be selected for benchmarking.

NOTES:

  • If the scaling_factor > 1, the difficulty range for that Challenge is increasing in difficulty.
  • If the scaling_factor < 1, the difficulty range for that Challenge is decreasing in difficulty.
  • For a given amount of compute, it is expected that the difficulty range will keep increasing until there are approximately 5000 solutions submitted every 120 blocks, i.e. the scaling_factor settles at 1.
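
The sketch below illustrates the adjustment step, under the assumption that scaling a frontier simply means multiplying each difficulty parameter on the base_frontier by the scaling_factor and rounding; the protocol's exact rounding and clamping rules may differ.

```python
THRESHOLD_NUM_QUALIFIERS = 5000
MAX_SCALING_FACTOR = 1.125  # scaling_factor is limited to this value

def scaling_factor(total_num_qualifiers):
    return min(total_num_qualifiers / THRESHOLD_NUM_QUALIFIERS, MAX_SCALING_FACTOR)

def scale_frontier(base_frontier, factor):
    # Multiply every difficulty parameter on the base frontier by the scaling factor
    return [tuple(round(p * factor) for p in point) for point in base_frontier]

factor = scaling_factor(6000)                         # 6000 / 5000 = 1.2, capped to 1.125
print(factor, scale_frontier([(50, 300), (60, 250)], factor))
```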

Algorithm Adoption

The adoption of an algorithm is calculated using each Benchmarker’s influence and the fraction of qualifiers (for a particular challenge) they computed using that algorithm.

We first calculate the weights across all algorithms:


X = num\_qualifiers\_the\_benchmarker\_found\_using\_that\_algorithm


Y = total\_qualifiers\_the\_benchmarker\_found


algorithm\_weight \mathrel{+}= benchmark\_influence \cdot (X / Y)


Influence is used to prevent manipulation of adoption by Benchmarkers. It disincentivises Benchmarkers from focusing on a single algorithm, as their influence would be low.

To calculate the adoption of an algorithm, we normalise the weights across all algorithms for each challenge:


algorithm\_adoption = normalise(weights\_across\_all\_algorithms)
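
Putting the pieces together, the adoption calculation for one challenge might look like the following sketch, where each Benchmarker contributes influence · (X / Y) to every algorithm they used and the resulting weights are normalised. The data shapes are assumptions for illustration.

```python
from collections import defaultdict

def algorithm_adoption(benchmarkers):
    # benchmarkers: list of dicts for a single challenge, each with an 'influence' value
    # and 'qualifiers_by_algorithm' mapping algorithm id -> number of qualifiers (assumed shape)
    weights = defaultdict(float)
    for b in benchmarkers:
        total = sum(b["qualifiers_by_algorithm"].values())          # Y
        if total == 0:
            continue
        for algo, count in b["qualifiers_by_algorithm"].items():    # X per algorithm
            weights[algo] += b["influence"] * (count / total)
    total_weight = sum(weights.values())
    return {algo: w / total_weight for algo, w in weights.items()}  # normalise per challenge

print(algorithm_adoption([
    {"influence": 0.6, "qualifiers_by_algorithm": {"algo_a": 90, "algo_b": 10}},
    {"influence": 0.4, "qualifiers_by_algorithm": {"algo_a": 20, "algo_b": 80}},
]))  # algo_a ≈ 0.62, algo_b ≈ 0.38
```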


Breakthrough Adoption

Multiple algorithms can be attributed to the same breakthrough, so we calculate the adoption of a breakthrough as the sum of the adoptions of all algorithms attributed to it.
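
For example, under an assumed mapping from each algorithm to the breakthrough it is attributed to:

```python
def breakthrough_adoption(algorithm_adoptions, breakthrough_of):
    # algorithm_adoptions: {algorithm_id: adoption}
    # breakthrough_of: {algorithm_id: breakthrough_id} -- an assumed attribution mapping
    totals = {}
    for algo, adoption in algorithm_adoptions.items():
        if algo in breakthrough_of:
            bt = breakthrough_of[algo]
            totals[bt] = totals.get(bt, 0.0) + adoption
    return totals

print(breakthrough_adoption(
    {"algo_a": 0.5, "algo_b": 0.25, "algo_c": 0.25},
    {"algo_a": "breakthrough_1", "algo_b": "breakthrough_1", "algo_c": "breakthrough_2"},
))  # {'breakthrough_1': 0.75, 'breakthrough_2': 0.25}
```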