🥢 On-chain Subjective Slashing Framework
Introduction
Slashing refers to penalizing protocol participants who deviate from protocol rules by removing a portion of their staked assets. This mechanism is unique to PoS protocols, as it requires staked assets that the blockchain itself can confiscate.
On-chain subjective slashing refers to penalizing nodes for faults that cannot be attributed to validators based solely on on-chain evidence or protocol rules. In oracle networks, faults typically take the form of submitting inaccurate data in an attempt to manipulate the oracle.
A key challenge in implementing robust slashing is the risk of nodes getting slashed due to honest mistakes. This concern is amplified in the subjective case, since even determining whether a fault occurred at all is debatable, beyond determining whether it was committed maliciously or honestly.
Therefore, a prerequisite to slashing is the ability to detect misreports reliably and accurately. We outline different considerations for designing such a mechanism, and then establish appropriate penalty policies to complete the detect-and-deter mechanism.
Purpose and Scope
We propose a minimal, stylized mathematical model to analyze how the slashing mechanism should be designed. We abstract away some details for both simplicity and generality, making the analysis relevant to a general data oracle rather than a specific type. The concrete example of price reports is kept in mind and given special attention throughout the document.
More specifically, we aim to achieve the following:
Establish a common framework of terminology and concepts to enhance communication within the company
Derive basic principles and guidelines for an optimal solution.
Better articulate considerations, limitations, and inherent trade-offs, and suggest several options along different points on the trade-off curve.
Lastly, we note that implementation details are out of scope.
Objective
First, we outline the goals we want detection and penalties to achieve in a non-formal way. We deliberately avoid formalizing the desired properties at this point, instead stating general objectives to keep in mind.
Discourage misreports. Requires the ability to detect faults and penalize appropriately to deter initial misconduct.
Avoid penalizing honest mistakes. Necessitates the ability to differentiate between honest errors and malicious actions.
Avoid discouraging risk-taking. The penalty system should not discourage participants from reporting abrupt and sharp changes in data values.
Discourage uninformed voting. Examples of uninformed voting strategies include following the majority vote or consistently reporting the last aggregated result.
Prevent correlated attacks, as defined and described below.
These objectives establish the foundation for developing effective detection and penalty mechanisms. Let us now examine our model.
Model
We introduce a model that is as general as possible, leaving the data domain, aggregation method, and metric unspecified. Here is the basic setup:
The protocol consists of $n$ data fetchers (players), submitting reports in rounds (for example, every block). Let $r_i^t$ be the report of player $i$ at round $t$, and assume the reports belong to the domain $D$. We use two special characters $\bot$ and $\times$ to denote non-reports and out-of-domain reports, respectively. Namely, we denote $r_i^t = \bot$ if and only if player $i$ did not report at time $t$, and assume without loss of generality that $r_i^t = \times$ in case the submitted value lies outside $D$. In each round all reports are aggregated to a single value by an aggregation function $f$. The aggregated value is denoted by $R^t = f(r_1^t, \ldots, r_n^t)$.
After each round, we compute two functions:
Detection function $d^t : (D \cup \{\bot, \times\})^n \times \mathcal{H} \to \{0,1\}^n$, where $d_i^t = 0$ corresponds to an honest report, $d_i^t = 1$ corresponds to a fraud, and $\mathcal{H}$ is the history, consisting of all past reports and past decisions.
Penalty function $p^t : (D \cup \{\bot, \times\})^n \times \mathcal{H} \to \mathbb{R}_{\geq 0}^n$, where $p_i^t$ corresponds to the amount of stake to be slashed from player $i$. A minimal sketch of this round structure follows.
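To make the setup concrete, here is a minimal Python sketch of a single round under this model. The median aggregation, the sentinel values, the slash fraction, and the helper names (`aggregate`, `detect`, `penalize`) are illustrative assumptions, not part of the model itself.

```python
from statistics import median

BOT = "NO_REPORT"      # sentinel for a missing report (the model's non-report symbol)
OOD = "OUT_OF_DOMAIN"  # sentinel for an out-of-domain report

def aggregate(reports):
    """Aggregation function f: here simply the median of the valid reports."""
    valid = [r for r in reports if r not in (BOT, OOD)]
    return median(valid) if valid else None

def detect(reports, aggregated, history):
    """Detection function: 1 marks a report as fraudulent, 0 as honest.
    This placeholder only flags out-of-domain submissions."""
    return [1 if r == OOD else 0 for r in reports]

def penalize(decisions, stakes, slash_fraction=0.05):
    """Penalty function: amount of stake slashed from each player."""
    return [slash_fraction * s if d == 1 else 0.0 for d, s in zip(decisions, stakes)]

# One round with three data fetchers (hypothetical numbers).
reports = [101.2, OOD, 99.8]
stakes = [1000.0, 1000.0, 500.0]
history = []

aggregated = aggregate(reports)            # R^t
decisions = detect(reports, aggregated, history)
penalties = penalize(decisions, stakes)
history.append((reports, decisions))
print(aggregated, decisions, penalties)    # 100.5 [0, 1, 0] [0.0, 50.0, 0.0]
```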
Detecting Faults When Truth Is Not Verifiable
By "non-verifiable truth," we mean that the value validators report on cannot be objectively verified, either because it is inherently unknowable or because the protocol cannot access it.
We refer to various methods and approaches to identify fraud as filters. These filters have different properties:
Type of proof: What information does the filter require - can it be applied using only on-chain data? What strength of evidence does this filter provide?
Cost: This includes expenses such as gas consumption for on-chain computation.
Execution time
Precision and recall: Different filters are located at different spots on the precision-recall tradeoff curve and, in particular, have different tendencies for false positives and false negatives.
We now describe and analyze different types of filters. We classify them into three classes: logical triggers, statistical evidence, and human judgment based filters.
Logical Triggers
Defined conditions that can be automatically checked and enforced. These should be determined within the protocol setup phase and updated periodically. Examples are:
Logic & Physics: methods based on studying the underlying function and identifying major deviations. Example conditions (a code sketch of these checks appears at the end of this section):
$r_i^t \notin D$: submissions outside of the domain.
$\delta(r_i^t, R^t) > \Delta$: reports far from the protocol result. Note this result is not known to the validator on submission.
$\delta(r_i^t, R^{t-1}) > \Delta$: reports far from the last protocol result (known to the validator upon submission).
$\delta(r_i^t, r_i^{t-1}) > \Delta$: reports far from the validator's own last report.
Robust against a malicious majority (coordinated attack).
Does not account for the lazy strategy of submitting the same response without gathering information on the actual value.
May discourage reporting on a sudden change in value.
Preliminary analysis of the specific data of interest. A comprehensive domain analysis should be conducted to determine and define:
Defining the domain of valid reports $D$ and being able to determine whether a report falls within this domain (computing the predicate $r \in D$).
Providing a metric function $\delta : D \times D \to \mathbb{R}_{\geq 0}$ for measuring the distance between values.
Specifying reasonable changes of the value over time in order to determine the threshold $\Delta$.
Social: evaluating a report relative to other reports in the same round. Most naturally, relative to the aggregation result: $\delta(r_i^t, R^t) > \Delta$. We can also consider comparisons to other functions of the round's reports $(r_1^t, \ldots, r_n^t)$. This class does not defend against a malicious majority (coordinated attack).
More options for the value against which we compare $r_i^t$ when deciding on $d_i^t$:
(1) Compared to the other reports $r_j^t$, $j \neq i$.
(2) Compared to the validator's past reports $r_i^{t-1}, r_i^{t-2}, \ldots$
(3) Compared to the aggregation result $R^t$. This is not the same as (1) because aggregation also takes into account validator stakes.
(4) Compared to a weighted average $\sum_j w_j r_j^t$, where the weights $w_j$ are derived from validator stakes. This is a generalization of (3).
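The following is a minimal sketch of how the logical and social triggers above might be evaluated for a numeric price feed, assuming a simple absolute-difference metric and a stake-weighted average as the comparison reference; the threshold values, domain bounds, and helper names are illustrative assumptions.

```python
def delta(a, b):
    """Illustrative metric on a numeric domain: absolute difference."""
    return abs(a - b)

def stake_weighted_reference(reports, stakes):
    """Comparison value: a stake-weighted average of the round's reports."""
    total = sum(stakes)
    return sum(w * r for w, r in zip(stakes, reports)) / total

def logical_triggers(report, last_report, last_result, reference,
                     domain=(0.0, 1e9), threshold=10.0):
    """Return a list of (trigger_name, fired) pairs for one report."""
    in_domain = domain[0] <= report <= domain[1]
    return [
        ("out_of_domain", not in_domain),
        ("far_from_reference", delta(report, reference) > threshold),
        ("far_from_last_result", delta(report, last_result) > threshold),
        ("far_from_own_last_report", delta(report, last_report) > threshold),
    ]

# Hypothetical round: one validator reports 135 while the reference sits near 109.
reports, stakes = [99.0, 101.0, 135.0], [800.0, 700.0, 500.0]
ref = stake_weighted_reference(reports, stakes)
print(logical_triggers(135.0, last_report=100.5, last_result=100.0, reference=ref))
```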
Statistical Evidence
Advanced statistical methods such as anomaly detection, comparing data against known fraud indicators or patterns, and machine learning algorithms that learn from historical fraud data to predict or identify fraudulent behavior. In more detail, different filters in this class include:
Anomaly Detection: Using statistical models to identify unusual behavior that deviates from the norm, indicating potential fraud.
Pattern Recognition: Analyzes historical data to identify patterns associated with fraudulent activities, helping to predict and detect similar attempts in the future.
Reinforcement Learning: Employs algorithms that learn from data over time to automatically improve the detection of fraudulent reports.
Rule-based Systems: Applies a set of predefined rules based on known fraud scenarios to detect fraud. These systems are often used in conjunction with other methods to enhance detection capabilities.
This method can be used to evaluate validators' long-range performance (discussed below) and trigger an alarm before an incident occurs.
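As one concrete example of this class, the sketch below scores each report by a robust z-score against the round's median, using the median absolute deviation; the scoring rule and the flagging threshold are assumptions for illustration, not a prescribed method.

```python
from statistics import median

def robust_z_scores(reports):
    """Score how far each report lies from the round median, in units of MAD."""
    med = median(reports)
    mad = median(abs(r - med) for r in reports) or 1e-9  # avoid division by zero
    return [(r - med) / (1.4826 * mad) for r in reports]

def anomaly_flags(reports, z_threshold=3.5):
    """Flag reports whose robust z-score exceeds the (illustrative) threshold."""
    return [abs(z) > z_threshold for z in robust_z_scores(reports)]

print(anomaly_flags([100.1, 99.9, 100.3, 100.0, 137.0]))  # only the last report is flagged
```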
Human Judgment Based Filters
Human oversight and judgment serve as an additional verification layer. For example:
Committee Voting: A committee of validators or stakeholders can vote to resolve disputes and identify faults. Ideally, the committee is a large, external, and impartial jury committed to participating in informed voting.
Random Validator Selection: A randomly selected validator is used to validate suspicious reports. This method is more affordable and faster than a full committee vote. The set from which the validator is chosen can be distinct from the protocol validators set.
Whistleblowing Mechanism: Any validator can submit a bond and raise a challenge against another validator.
Each filter in this class presents a mini mechanism design challenge of its own, as participants' incentives must be properly aligned to ensure the filter's effectiveness. For example, voters should be incentivized to vote according to their true beliefs.
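To illustrate the incentive-alignment point, here is a toy sketch of a whistleblowing flow in which a challenger posts a bond that is returned with a reward if the challenge is upheld and forfeited otherwise; the bond handling, reward source, vote threshold, and slash fraction are all illustrative assumptions rather than a proposed parameterization.

```python
def resolve_challenge(challenger_bond, accused_stake, committee_votes,
                      upheld_threshold=2/3, reward_fraction=0.5, slash_fraction=0.1):
    """Resolve a whistleblower challenge by a committee vote.

    committee_votes: list of booleans, True = 'the report was fraudulent'.
    Returns (bond_returned, challenger_reward, accused_penalty).
    """
    upheld = sum(committee_votes) / len(committee_votes) >= upheld_threshold
    if upheld:
        penalty = slash_fraction * accused_stake
        return challenger_bond, reward_fraction * penalty, penalty
    # Challenge rejected: the bond is forfeited, discouraging frivolous challenges.
    return 0.0, 0.0, 0.0

print(resolve_challenge(10.0, 1000.0, [True, True, True, False]))  # (10.0, 50.0, 100.0)
```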
Evaluating Validators' Performance
Each validator receives a rating that reflects their overall protocol performance. This rating enables a reputation system that strengthens the slashing mechanism in two key ways:
First, it can be viewed as a statistical filter, serving as a tie-breaker in case of a dispute. For example, it supports more informed voting, thus enhancing a committee voting mechanism.
Second, it adds an additional penalizing dimension by allowing us to decrease a validator's rating instead of slashing their stake. This is especially helpful in cases of minor faults or faults that are not fully attributable or cannot be proven to have been committed maliciously. That is, we can expand the model by introducing a reputation score $\rho_i^t$ for each validator $i$, and update the penalty function so that a penalty can be a reduction in reputation as well as a slash of stake.
Reputation
Reputation can be based on and measured by the following criteria:
Participation Rate: Measure the frequency and consistency of the validator's participation in protocol activities.
Report Accuracy: Evaluate how closely the validator's reports align with the aggregated results or another reference value.
Prediction of Abrupt Changes: Assess the validator's ability to predict sudden and significant changes in data values.
Trend Prediction Accuracy: Gauge the accuracy of the validator's predictions regarding long-term trends. For example, let $t_1 < t_2$ be two points in time where $R^{t_1} < R^{t_2}$. We can check whether, during the interval $[t_1, t_2]$, validator $i$'s reports were of an ascending nature (there are various methods to measure this; the most naive is the number of indices $t_1 \leq t < t_2$ such that $r_i^t \leq r_i^{t+1}$).
Conformity to Expected Voting Distribution: Determine how closely the validator's votes match the expected distribution pattern. For example, assign each validator $i$ a vector $v_i$ with $v_i^t = 1$ if $r_i^t > R^t$ and $v_i^t = 0$ otherwise; over time we expect the entries of $v_i$ to be roughly balanced between $0$ and $1$. Additional statistical tests of greater complexity can be applied, but this demonstrates the core concept. (A sketch of some of these criteria follows this list.)
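A minimal sketch of how a few of these criteria could be computed from a validator's report history is shown below; the specific scoring formulas (participation ratio, ascent count, conformity balance) are illustrative choices rather than the framework's prescribed definitions.

```python
def participation_rate(reports, no_report=None):
    """Fraction of rounds in which the validator actually reported."""
    return sum(r is not no_report for r in reports) / len(reports)

def ascent_score(reports, t1, t2):
    """Naive trend measure: fraction of steps in [t1, t2) where the report did not decrease."""
    return sum(reports[t] <= reports[t + 1] for t in range(t1, t2)) / (t2 - t1)

def conformity_balance(reports, aggregated):
    """How balanced the validator is between reporting above and below the aggregate.
    1.0 means perfectly balanced, 0.0 means always on the same side."""
    above = sum(r > a for r, a in zip(reports, aggregated))
    frac = above / len(reports)
    return 1.0 - abs(2 * frac - 1.0)

reports    = [100.0, 100.5, 101.0, 100.8, 101.5]
aggregated = [100.2, 100.4, 100.9, 101.0, 101.3]
print(participation_rate(reports), ascent_score(reports, 0, 4), conformity_balance(reports, aggregated))
```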
Penalties with a Reputation System
A reputation system can be used to relax actual stake slashing. Alternatively, we can consider:
Decreasing profile rating. For certain faults, actual slashing occurs only if the validator's rating falls below a specific threshold.
Decreasing effective stake. Define the effective stake of validator $i$ as, for example, $s_i^{\mathrm{eff}} = \rho_i \cdot s_i$, where $s_i$ is the staked amount and $\rho_i \in [0, 1]$ is the reputation score (a worked example follows this list).
Through the use of effective stake, a rating decrease impacts rewards and future income from the protocol. We can decrease the effective stake either directly (decreasing $s_i$) or indirectly by decreasing the reputation $\rho_i$.
Reducing rewards and future income from the protocol.
Revoking: prohibiting the validator from participating in the protocol.
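As a small worked example of the effective-stake option (under the multiplicative definition assumed above), a drop in reputation immediately reduces a validator's share of protocol rewards without touching the underlying stake; the proportional reward rule is an illustrative assumption.

```python
def effective_stake(stake, reputation):
    """Effective stake under the assumed multiplicative definition."""
    return reputation * stake

def reward_shares(stakes, reputations, total_reward=100.0):
    """Distribute a round's rewards in proportion to effective stake."""
    eff = [effective_stake(s, r) for s, r in zip(stakes, reputations)]
    total = sum(eff)
    return [total_reward * e / total for e in eff]

# Validator 2's reputation is cut from 1.0 to 0.5 as a penalty: its income share drops.
print(reward_shares([1000.0, 1000.0], [1.0, 1.0]))  # [50.0, 50.0]
print(reward_shares([1000.0, 1000.0], [1.0, 0.5]))  # [66.66..., 33.33...]
```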
Combining different filters
Denote by $F_\ell$ the filter applied at level $\ell$; it returns an estimated probability of a misreport, $F_\ell \in [0, 1]$. Assume filters $F_1, \ldots, F_k$. In this section we discuss different methods for combining these filters into a robust mechanism.
Considerations
Avoid expensive computation if possible.
Some filters serve as an alarm before the fact, and not only apply after the event has occurred.
Complementing and correlated filters. For example, an initial definition of correlation can be to consider a pair of filters $F_\ell, F_m$ correlated if $\Pr[F_m \text{ alerts} \mid F_\ell \text{ alerts}] > \Pr[F_m \text{ alerts}]$.
Complementing means that both of them alerting constitutes strong evidence.
Correlated means that if one of them alerts, it is less surprising that the other one also alerts.
Example of taking correlation into account: if the aggregated value changes drastically between rounds, i.e. $\delta(R^t, R^{t-1}) > \Delta$, then either $R^t$ or $R^{t-1}$ was (w.h.p.) manipulated and is not reliable. In this case, reports should be compared to an alternative aggregated value. In any case, in such a scenario the trigger conditions should be altered. (A sketch of combining correlated filters follows this list.)
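A small sketch of this idea, under the assumption that each filter's past alert history is available as a boolean series: the empirical lift $\Pr[F_m \mid F_\ell] / \Pr[F_m]$ is used to decide whether two filters are correlated, and a pair of correlated alerts is counted as weaker evidence than a pair of complementary ones. The lift threshold and the evidence weights are illustrative.

```python
def alert_prob(history):
    """Empirical probability that a filter alerts, from its boolean history."""
    return sum(history) / len(history)

def lift(hist_a, hist_b):
    """How much more likely filter B is to alert given that filter A alerted."""
    joint = [b for a, b in zip(hist_a, hist_b) if a]
    if not joint or alert_prob(hist_b) == 0:
        return 1.0
    return alert_prob(joint) / alert_prob(hist_b)

def combined_evidence(alerts, histories, lift_threshold=1.5):
    """Sum per-filter evidence, counting a correlated pair of alerts only once."""
    score, counted = 0.0, set()
    for i, a in enumerate(alerts):
        if not a or i in counted:
            continue
        score += 1.0
        for j in range(i + 1, len(alerts)):
            if alerts[j] and lift(histories[i], histories[j]) > lift_threshold:
                counted.add(j)  # j's alert is largely explained by i's alert
    return score

hist_1 = [1, 0, 1, 1, 0, 0, 1, 0]
hist_2 = [1, 0, 1, 1, 0, 0, 0, 0]  # alerts mostly when filter 1 does: correlated
hist_3 = [0, 1, 0, 0, 0, 1, 0, 0]  # alerts independently of filter 1
print(combined_evidence([True, True, True], [hist_1, hist_2, hist_3]))  # 2.0
```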
Chain Filter: A Baseline Design
Levels are ordered based on cost, execution time, and type of proof. The ideal proof is purely logical, and the last resort involves matching against off-chain information.
We suggest a chain-filter mechanism, modeled after the multilevel court structure. This involves applying a sequence of filters, starting with cost-effective, quick filters, and moving to more expensive ones if needed. At each step, we either make a definitive decision or advance to the next level. To prevent "honest slashing," we prioritize precision at each stage. If a situation is unclear, we turn to more robust, costly methods to classify a report as fraudulent.
Formally, at every step $\ell$ we compute a decision function $g_\ell$ whose output lies in $\{0, 1, \ast\}$. If the result is either $0$ or $1$, the report is considered honest or fraudulent respectively, and the detection process is over. An output of $\ast$ means we could not reach a decision and we continue to compute the next level. The level functions satisfy

$$d_i^t = g_{\ell^*}, \qquad \ell^* = \min\{\ell : g_\ell \neq \ast\}.$$
Meaning that once a definite decision is reached at some level, we conclude whether the report is fraudulent or not. Note:
We only convict nodes along the way.
This is a general design, and different data may require different filters and ordering.
We can apply filters in a “surprise inspection” manner. (A sketch of the escalation loop follows these notes.)
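The chain-filter escalation described above can be sketched as a simple loop over level functions ordered from cheapest to most expensive; the three-valued return convention and the example levels are assumptions for illustration.

```python
UNDECIDED = None  # stands for the 'cannot decide yet' output of a level

def cheap_domain_level(report):
    """Level 1: purely logical check, essentially free."""
    return 1 if report < 0 else UNDECIDED  # negative price: definitely a fraud

def statistical_level(report, reference, threshold=25.0):
    """Level 2: statistical plausibility against a reference value."""
    if abs(report - reference) <= threshold:
        return 0          # clearly consistent: honest
    return UNDECIDED      # suspicious but not conclusive: escalate

def committee_level(report, committee_says_fraud):
    """Level 3: last resort, expensive human judgment."""
    return 1 if committee_says_fraud else 0

def chain_filter(report, reference, committee_says_fraud):
    """Apply levels in order of cost; stop at the first definite decision."""
    levels = [
        lambda: cheap_domain_level(report),
        lambda: statistical_level(report, reference),
        lambda: committee_level(report, committee_says_fraud),
    ]
    for level in levels:
        decision = level()
        if decision is not UNDECIDED:
            return decision
    return 0  # default to acquittal if no level decides

print(chain_filter(101.0, reference=100.0, committee_says_fraud=False))  # 0, decided at level 2
print(chain_filter(190.0, reference=100.0, committee_says_fraud=True))   # 1, decided at level 3
```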
Inner-level and Inter-level Tuning
In addition to deciding where to position ourselves on the precision-recall curve, we need to outline the relationships between different levels. Considerations include the keys by which the levels are sorted: type of proof provided, cost, and execution time.
We assign each player $i$ a number $q_i \in [0, 1]$ which represents the probability that the player's report is a fraud. That is, “definitely a fraud” corresponds to $q_i = 1$ and “definitely not a fraud” corresponds to $q_i = 0$. This number could be the output of an AI algorithm, as mentioned above.
A natural rule for deciding who reported false information is a threshold rule: we decide on a threshold $\tau$ and determine that reports with $q_i > \tau$ are considered frauds. The threshold can be adjusted to fit the system's specific requirements.
Improvement 1 - Complementing and Correlated Filters
Different layers may contain multiple filters connected by an 'and' relation, while the relationship between layers can be either 'or' or 'veto'. Due to cost and execution time considerations, a 'veto' condition is not necessarily positioned in the first layer; it may instead be used as a last resort.
Summary
This framework proposes a comprehensive approach to on-chain subjective slashing in oracle networks, addressing the challenge of penalizing inaccurate data submissions while protecting honest validators. Key components include:
Multiple filtering layers combining automated detection (statistical models, pattern recognition) with human judgment mechanisms
A reputation system that provides an additional dimension for penalties and serves as a statistical filter
A chain-filter mechanism that progresses from cost-effective to more expensive verification methods
Flexible threshold rules and inter-level relationships that can be tuned based on specific system requirements
The framework aims to balance precision and recall while maintaining economic feasibility and execution efficiency.