From CryptoWiki


  • "Sharding is a type of database partitioning that separates very large databases into smaller, faster, more easily managed parts called data shards. The word shard means a small part of a whole."

Different Sharding options

  • This post (12-2018) by someone from NEAR goes into the different ways of sharding. I advise reading the piece itself, but some concepts are summarized below:

The simplest Sharding, a.k.a. Beanstalk

Let’s start with the simplest approach to sharding, which throughout this write-up we will call a Beanstalk. This is also what Vitalik calls “scaling by a thousand altcoins” in this presentation.

In this approach instead of running one blockchain, we will run multiple, and call each such blockchain a “shard”. Each shard will have its own set of validators. Here and below we use a generic term “validator” to refer to participants that verify transactions and produce blocks, either by mining, such as in Proof of Work, or via a voting-based mechanism.

The Beanstalk design, though simple, is sufficient to outline some major challenges in sharding.

The first challenge is that with each shard having its own validators, each shard is less secure than the entire chain. If a non-sharded chain with X validators decides to hard-fork into a sharded chain and splits those X validators across 10 shards, each shard now only has X/10 validators, and corrupting one shard only requires corrupting 5.1% (51% / 10) of the total number of validators.
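The arithmetic above can be sketched in a few lines. This is a hypothetical illustration, not code from any real sharded chain; the function name and parameters are made up for the example.

```python
# Security dilution under naive (Beanstalk-style) sharding: splitting
# the validator set evenly across n shards means corrupting one shard
# requires a majority of X/n validators, i.e. only 51%/n of the total.

def corruption_threshold(total_validators, num_shards, majority=0.51):
    """Fraction of ALL validators needed to control a single shard."""
    per_shard = total_validators / num_shards
    return (majority * per_shard) / total_validators

# 1000 validators split across 10 shards: only 5.1% of the whole
# network is enough to take over one shard.
print(corruption_threshold(1000, 10))
```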

Which brings us to the second point: who chooses validators for each shard? Controlling 5.1% of validators is only damaging if all those 5.1% of validators are in the same shard. If validators can’t choose which shard they get to validate in, a participant controlling 5.1% of the validators is highly unlikely to get all their validators in the same shard, heavily reducing their ability to compromise the system.

Almost all sharding designs today rely on some source of randomness to assign validators to shards. Randomness on a blockchain is itself a very challenging topic that deserves a separate blog post at some later date, but for now let’s assume there’s some source of randomness we can use.
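A minimal sketch of such random assignment, assuming a shared unpredictable seed is already available (producing that seed is the hard, unsolved part on a real chain; here Python's PRNG stands in for it):

```python
import random

def assign_validators(validator_ids, num_shards, seed):
    """Shuffle validators with a shared seed, then deal them
    round-robin into shards, so an attacker who cannot predict the
    seed cannot concentrate their validators in one shard."""
    rng = random.Random(seed)
    shuffled = list(validator_ids)
    rng.shuffle(shuffled)
    return [shuffled[i::num_shards] for i in range(num_shards)]

# 100 validators spread evenly over 10 shards:
shards = assign_validators(range(100), 10, seed=42)
```

Because every honest node derives the same seed, all nodes compute the same assignment without further communication.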

Both the randomness generation and the validator assignment require computation that is not specific to any particular shard. For that computation, practically all existing designs have a separate blockchain that is tasked with performing operations necessary for the maintenance of the entire network. Besides generating random numbers and assigning validators to the shards, these operations often also include receiving updates from shards and taking snapshots of them, processing stakes and slashing in Proof-of-Stake systems, and rebalancing shards when that feature is supported. Such a chain is called a Beacon chain in Ethereum and Near, a Relay chain in PolkaDot, and the Cosmos Hub in Cosmos.

Quadratic sharding

If the nodes operating the network, including the nodes in the Beacon chain, become four times faster, then each shard will be able to process four times more transactions, and the Beacon chain will be able to maintain 4 times more shards. The throughput across the system will increase by the factor of 4 x 4 = 16 — thus the name quadratic sharding.
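The quadratic relationship can be checked with simple arithmetic. The numbers below are illustrative, not measurements from any real system:

```python
# Quadratic sharding: if node capacity grows by a factor k, each shard
# processes k times more transactions AND the beacon chain can maintain
# k times more shards, so total throughput grows by k squared.

def total_throughput(tps_per_shard, num_shards, speedup):
    return (tps_per_shard * speedup) * (num_shards * speedup)

baseline = total_throughput(100, 10, 1)   # 1,000 tps
faster   = total_throughput(100, 10, 4)   # 16,000 tps: 16x, not 4x
```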

State Sharding

Practically, under State Sharding the nodes in each shard are building their own blockchain that contains transactions that affect only the local part of the global state that is assigned to that shard. Therefore, the validators in the shard only need to store their local part of the global state and only execute, and as such only relay, transactions that affect their part of the state. This partition linearly reduces the requirements for compute power, storage, and network bandwidth, but introduces new problems, such as data availability and cross-shard transactions.
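One common way to partition state, sketched below under the assumption that accounts are mapped to shards by a hash of their address (the function names and shard count are hypothetical). A transaction whose sender and receiver land on different shards is a cross-shard transaction, which is exactly where the new problems arise:

```python
import hashlib

NUM_SHARDS = 4

def shard_of(account: str) -> int:
    """Deterministically map an account address to a shard."""
    digest = hashlib.sha256(account.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def is_cross_shard(sender: str, receiver: str) -> bool:
    """A transfer touching state on two shards cannot be executed
    by either shard's validators alone."""
    return shard_of(sender) != shard_of(receiver)
```

Every node computes the same mapping, so a transaction can be routed to the right shard without any coordination.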

In part two the writer goes deeper into Sharding's two biggest unsolved challenges (12-2018): data availability and data validity.


"Sharding is the idea that we have the same coins and the same consensus, but we push them into different validator zones where transactions can happen that not everyone needs to verify. Sharding overcomes the challenge of "everyone needs to see everything". Generally speaking, we only have one really widely accepted way of doing sharding and that's a federated sidechain and some multisig quorum that signs off on what happens off-chain. If the federation decides to steal the money, the mainchain has no way of knowing if the federation is being honest or not.

There's also SPV proofs, which are generally considered to be not that viable or too risky. I personally don't think that miners should be trusted. Boiling any consensus down to an SPV proof makes me very nervous, I don't think we should go down that road.

Another sharding proposal is plasma. I'm not comfortable with the security boundaries of plasma. Many of the plasma people will assert that the security model is identical to lightning which I don't think is correct.

I've found that almost every sharding proposal can be broken by asking what happens if the shard does an information withholding attack. Just doing that analysis usually lets you figure out how to totally break the system."

A 51% attack is made easier if a network uses sharding to increase scalability. Sharding is where a DAG or blockchain is split up into many different pieces, since it is faster to process 100 pieces, each 1/100th of the network, in parallel than to process the network as a whole. This causes a problem, as it means that an attacker can attack just some subsets of the network rather than the entirety of it. A 51% attack becomes possible with far less than 51% of the network hashpower.

The way in which a DAG shards also causes an issue. While all shards operate under the same protocol, each one now only sees part of the ongoing transactions and their associated history. This makes preventing double spends more difficult.

"A research paper published by Cornell University explains that the chain can be manipulated by “shard takeovers.” This means a node can be corrupted and the compromised data can lead to severe and permanent losses. “Existing sharding proposals achieve efficiency scaling by compromising on trust,” the paper highlights."