Design against Bad Actors

Bad Actors in Privacy Systems

The combination of the permissionless nature of blockchains and the anonymity that shielded pools (such as the one described in this document) enable inevitably introduces the issue of bad actors being able to move funds in an untraceable way. A frequently cited example is a black-hat hacker depositing stolen funds from a bridge hack into a shielded pool with the intention of later withdrawing them, thereby erasing the link to the initial deposit. This effectively "launders" the funds, i.e., hides their actual origin.

Funds that have passed through a shielded pool can then be safely deposited into any centralized exchange (CEX) or off-ramp, without the risk of being frozen. Thus, privacy systems based on shielded pools (which constitute the majority, if not all, of such systems) that do not implement any money-laundering countermeasures are a perfect tool for bad actors (hackers and beyond) to exit with stolen or otherwise illicit funds.

Why Care

We believe that, along with UX issues, the problem of bad actors is the major obstacle to wide adoption of privacy systems. Indeed, there are a number of reasons why this issue cannot simply be ignored:

  • Because of their use for illicit activities, privacy systems are considered shady and undesirable by many blockchain users.

  • Legitimate users of shielded pools are constantly worried that the protocol they use might become sanctioned in one or several countries (the major example being TornadoCash, sanctioned by OFAC). In such a case the users might be seriously affected: either by landing on a specific blacklist, or even by having issues withdrawing their funds.

  • Privacy systems, while enabling privacy for legitimate users, also make it easier for bad actors to exit from crypto assets to fiat. It is by no means easy to assess whether the net effect is positive in the end.

Without attempts to improve the protocol design of privacy systems with countermeasures against bad actors exploiting them, this technology (even though very mature at the cryptography layer) is unlikely to reach wide adoption.

Categorization of Approaches

The importance of the problem of bad actors in privacy systems is widely recognized and, consequently, numerous approaches have been proposed to deal with it. Below we propose a simple yet comprehensive categorization of solutions.

The premise of privacy systems is to make certain blockchain actions anonymous. To counter bad actors and money laundering, on the other hand, the protocol is enriched with mechanisms to reveal the authors of these anonymous actions. The categorization is based on who can decide that a reveal must happen:

  • Voluntary Reveal: this is a category of solutions in which the user is fully in charge of their anonymity, and only the user may decide to reveal their actions to a given set of parties.

  • Involuntary Reveal: in this category reveals can be forced, even if the user does not agree.

Below we discuss some approaches from both categories.

Voluntary Reveal -- Approaches

Viewing Keys

Viewing keys are a way for a user to reveal some subset of their anonymous actions to a chosen party. Technically, the user sends a piece of cryptographic material to a given party, which allows that party to identify some of the blockchain transactions as the user's and to see their details. The idea of viewing keys is to use them upon request from a particular actor: an auditor, or a CEX/off-ramp, to prove that funds that went through a shielded pool have legitimate origins. This way the user can reveal some of their actions to a set of actors they trust (or are forced to trust because of specific regulations).
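
To make this concrete, below is a minimal sketch of the data flow, assuming a symmetric scheme (Fernet from the Python cryptography package) and hypothetical memo contents. Real shielded pools, including our Shielder, derive viewing keys from the user's key material inside the protocol, so treat this as an illustration of the idea rather than an actual implementation.

```python
# Toy sketch of viewing keys (illustrative names and memo format).
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# The user derives a dedicated viewing key, separate from their spending key,
# so sharing it grants read access to their history but no ability to spend.
viewing_key = Fernet.generate_key()
cipher = Fernet(viewing_key)

# Alongside each shielded transaction, the user publishes an encrypted memo
# with the transaction's details. Only holders of the viewing key can read it.
memo = cipher.encrypt(b'{"action": "deposit", "amount": 100, "asset": "TOKEN"}')

# Later, the user hands viewing_key to an auditor or a CEX, which can then
# decrypt the memos of this user's transactions -- and nothing else.
print(cipher.decrypt(memo))  # b'{"action": "deposit", ...}'
```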

Effectiveness in Countering Bad Actors. While the idea of selective reveal is powerful and lets the user stay in control of their own data, we believe that it does not solve the problem of asset laundering in shielded pools. This is because revealing the user's trace is completely voluntary, and thus bad actors will simply not do it. One could argue that institutions such as CEXes, off-ramps and others that deal with user deposits should require reveals via viewing keys and implement legitimacy checks for fund origins, and in such a case viewing keys would be a satisfactory solution to our problem. This is a valid point, however:

  • These institutions will simply not make reveals obligatory. And even if they eventually do, the process of forcing them to implement these measures will take too much time for this to be viable.

  • If every institution asks the user for their source of funds, then the value of shielded pools quickly deteriorates, because the revealed data can simply be leaked or even sold for profit.

Conclusion. Viewing keys are useful to have, as they give the user the ability to have their history audited by third parties if they choose to do so. We believe each shielded pool should implement some form of viewing keys (as our Shielder does): technically it is not a complex addition, and it enables important user-facing features. However, viewing keys alone are not enough to solve the problem of bad actors mixing illicit funds.

Proof of Innocence

The concept of "proofs of innocence" has been independently proposed by several researchers in the space and is a crucial part of the Privacy Pools design by Buterin et al. The main idea is as follows:

  1. We allow every participant to join the pool by depositing assets.

  2. Upon leaving the pool (or later, upon using the withdrawn funds), the user can generate a zk-proof that their corresponding deposit to the pool (assuming for simplicity that deposits and withdrawals are 1-to-1) is either (a) part of a specific "whitelist" set of deposits, or (b) NOT part of a specific "blacklist" set of deposits.

The maintenance of whitelists and blacklists is independent of the protocol; there can also be multiple lists, and the user can prove legitimacy against several such lists if necessary. Blacklists are expected to consist of "illicit" deposits -- assets that were proven to be either stolen or obtained illegally. Of course, there can never be clear, objective rules as to what to include in such a list; however, this problem is out of scope of this write-up.
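
At its core, the statement being proven reduces to set membership against a committed list. Below is a minimal sketch of that underlying check, assuming the list is committed to as a SHA-256 Merkle tree (the list contents and helper names are hypothetical). An actual proof of innocence wraps this verification inside a zk-proof, so the verifier learns only the list's root, not which deposit is the user's.

```python
# Toy sketch of the set-membership check behind a proof of innocence.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes from leaf to root; the bool means 'sibling is on the right'."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, on_right in proof:
        node = h(node + sibling) if on_right else h(sibling + node)
    return node == root

# A hypothetical whitelist of deposit commitments, maintained off-protocol:
whitelist = [b"deposit-1", b"deposit-7", b"deposit-42", b"deposit-99"]
root = merkle_root(whitelist)
proof = merkle_proof(whitelist, 2)         # the user's deposit is "deposit-42"
assert verify(b"deposit-42", proof, root)  # the membership check the zk-proof wraps
```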

This idea can be seen as a generalization of viewing keys, in the sense that the user does not reveal concrete deposits, but instead proves inclusion in a set of deposits. One could wonder why we included it in the "Voluntary Reveal" category. This is because no details of the user are ever revealed without the user's permission. Even if the protocol does not allow withdrawal without proving innocence against a blacklist, this does not force the user's deanonymization: indeed, they can stay in the pool indefinitely. This might not sound like a problem, but as we explain in the analysis below, it still causes significant trouble.

Effectiveness in Countering Bad Actors. It is worth mentioning that the proof of innocence concept is very general and can be applied at a few different steps: upon deposit to the pool, upon withdrawal from the pool, or upon deposit of funds from the pool to a third party. The last option -- where a third-party institution checks the proof of innocence -- faces the same issue as viewing keys: institutions are reluctant and slow to implement such measures, and as long as there is at least one major off-ramp platform that does not implement them, the solution as a whole just does not work.

When it comes to generating proofs upon deposit to or withdrawal from the pool, there is an inherent flaw that makes this solution ineffective: whatever proofs the users are forced to generate are against the CURRENT blacklist or whitelist. It is often the case that particular funds are deemed illicit (and blacklisted) only after a long period of time has passed. This might be enough for the bad actor to deposit to the pool, wait, and withdraw without their deposit ever appearing on a blacklist. Once their deposit is blacklisted, they might be long gone, or even worse, they might deposit to the pool a second time, and there is no way to link the second deposit to the illicit initial deposit. One cannot conclude that this has zero effectiveness, because it does make the job of bad actors harder: they need to worry about the timing of blacklisting. All in all, however, this is far from satisfactory, as bad actors can game the system.

Conclusion. Similarly to viewing keys, proofs of innocence are a nice idea that has a chance of becoming effective once large institutions start implementing appropriate measures against bad actors. At the current stage, however, they are not sufficient to solve the problem by themselves. In the vanilla version, proofs of innocence can be easily gamed (by depositing and withdrawing early), and their effectiveness largely depends on the quality of the maintained blacklists, which is not an easy problem.

Involuntary Reveal -- Approach

Instead of listing concrete instances of this approach, we just describe the main idea. One concrete variant is described in Anonymity Revokers.

The idea of involuntary reveal is quite simple: since bad actors join the shielded pool in order to launder their funds, we want a mechanism to completely reveal the trace of a user originating from a particular deposit. In other words: if an illicit deposit is made, the system will make sure to reveal all actions of the user who made this deposit and prevent the user from gaining anonymity in the shielded pool. In particular, the withdrawal transaction will be fully linkable to the deposit, hence shielding the tokens has no effect for such a user. Crucially, the user does not have to consent to being deanonymized -- that is why this is called an "involuntary reveal".

Technically, implementing involuntary reveal typically involves publishing, alongside transactions, encryptions that can, under certain circumstances, be decrypted by certain actors in the system. There must also be a process in place for deciding which users in the system should be deanonymized, and a mechanism that makes sure deanonymization happens only when such a decision is taken. Cryptographically, there are a few approaches to implementing such a mechanism; below we list them starting from the most exotic and least feasible, towards the simpler and more pragmatic ones (a sketch of the threshold-decryption variant follows the list):

  • Witness Encryption: allows creating ciphertexts that are decryptable only if a certain event happens on-chain, for instance a governance decision to deanonymize a certain user. This is, however, far from practical.

  • Threshold Decryption: a committee of nodes jointly holds a secret-shared decryption key and is expected to use it only upon deanonymization requests.

  • TEE: trusted hardware holds a key and is programmed to decrypt only upon receiving a certificate that a user must be deanonymized.

  • Trusted Party: a trusted institution holds the decryption key.
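
To illustrate the threshold-decryption option, here is a minimal sketch of the underlying key management, assuming plain Shamir secret sharing over a prime field (all parameters and names are illustrative). Real deployments would use distributed key generation and threshold decryption protocols, in which the key is never reconstructed in a single place; this sketch only shows why fewer than t committee members cannot deanonymize anyone on their own.

```python
# Toy sketch of t-of-n secret sharing of a (hypothetical) revoking key.
import secrets

P = 2**521 - 1  # a Mersenne prime, used as the field modulus

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Shamir: hide `secret` in the constant term of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbelow(P)             # the deanonymization (revoking) key
shares = split(key, n=5, t=3)          # 5 committee members, threshold 3
assert reconstruct(shares[:3]) == key  # any 3 shares recover the key
assert reconstruct(shares[1:4]) == key # ...regardless of which 3
```

The central design choice is the threshold t: it trades off safety (t members must collude to revoke anonymity unjustly) against liveness (t honest members must cooperate for a legitimate revocation to go through).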

Effectiveness in Countering Bad Actors. Involuntary reveal is brutally effective in countering fund laundering. In fact, it is so effective that bad actors are unlikely to even try laundering funds in such a shielded pool, since that would be wasted effort and wasted time. Note that this approach does not suffer from the timing problem of innocence proofs, where the bad actor enters and leaves the pool within a short period. Here, the bad actor can certainly do that, but the deposit and withdrawal transactions can still be linked retroactively, and hence blockchain analysis companies and/or law enforcement can still trace such users.

Downsides. While, as stated, involuntary reveal is very effective in countering bad actors trying to launder funds in shielded pools, this approach is not without downsides. What the user fundamentally loses is full control over deciding to whom they reveal their transaction trace. Depending on how the anonymity revoking is specifically implemented, this can be more or less worrying; however, no matter how it is done, the concerns are:

  • a user might be deanonymized unjustly,

  • a user might be deanonymized as a result of leaked keys of anonymity revokers,

  • in certain implementations -- some parties might track all users without this ever becoming apparent.

These concerns are all valid, yet we believe that as the cryptographic foundations of anonymity revoking become more mature, we will be able to reduce the severity of these concerns. Also, as thoroughly discussed above, all the remaining approaches do not seem to be sufficient and do not really solve the problem; hence, we must accept trade-offs.

Other Approaches

We describe several other ideas that have been proposed or mentioned in the community as possible countermeasures against asset laundering in shielded pools. For each, we assess its viability.

User Gating (KYC)

There is a misconception that if we force KYC upon entering the shielded pool, then the problem of laundering funds in shielded pools magically disappears. This is far from true; in reality, KYC does not help at all. If withdrawals and deposits are unlinkable (the main premise of shielded pools), then there is not much one can do after learning that a specific deposit consisted of funds from illicit activities: the funds will be withdrawn via an unlinkable transaction. The knowledge of the identity of the depositor does not really help to prevent using the pool for mixing:

  • knowing the identity of the user does not prevent mixing in any way,

  • for a bad actor, getting a counterfeit KYC is very easy, so it is not even possible to prosecute the individual whose identity was used to deposit, because this is likely not the offender.

Given the above, all it takes to circumvent this countermeasure is to buy a KYC -- something that bad actors are known to do.

Rate Limiting

The most dangerous actions in shielded pools are always large deposits and large withdrawals -- this is typically how a bad actor would try to mix their funds. Hence, one could propose a countermeasure in the form of suitable withdrawal and deposit limits per user.

There are a few problems with this solution:

  • To even begin implementing this countermeasure, one must have a way to achieve sybil resistance; this is possible but tricky in privacy systems. If there is no sybil resistance, then a user will circumvent limits by splitting their identity into multiple ones (see the sketch after this list).

  • Even with sybil resistance in its strongest possible form, KYC, there is nothing preventing a bad actor from buying multiple identities and splitting the funds to be laundered across multiple accounts. The cost of obtaining new KYCs is likely negligible compared to the value of the assets they aim to mix.

  • There are legitimate uses of shielded pools in which a user withdraws and deposits the same funds multiple times during the day, for instance when implementing interactions with external public contracts from the shielded pool. If each such interaction counts towards the limit, the user might quickly hit it, which would deteriorate their experience.

  • Calibrating the limits to balance user experience against countering laundering might be very tough. Indeed, even if the limit is 1k USD daily, a bad actor can launder 365k USD yearly, which is significant and does not even pose any particular inconvenience for the bad actor.
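
To make the sybil issue concrete, here is a toy sketch of a per-identity daily withdrawal limit and its circumvention. All names and numbers are illustrative; a real pool would enforce limits inside the protocol rather than in application code.

```python
# Toy sketch: a per-identity daily limit fails without sybil resistance,
# because the laundered volume scales linearly with (cheaply bought) identities.
from collections import defaultdict

DAILY_LIMIT = 1_000  # USD, per identity per day

class Pool:
    def __init__(self):
        self.withdrawn_today = defaultdict(int)

    def withdraw(self, identity: str, amount: int) -> bool:
        if self.withdrawn_today[identity] + amount > DAILY_LIMIT:
            return False  # the limit is enforced, but only per identity
        self.withdrawn_today[identity] += amount
        return True

pool = Pool()
assert not pool.withdraw("mallory", 50_000)  # blocked by the limit...

# ...but 50 bought identities move the same 50k USD in a single day:
moved = sum(DAILY_LIMIT for i in range(50) if pool.withdraw(f"id-{i}", DAILY_LIMIT))
print(moved)  # 50000
```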
