Building Consensus or “Kicking the Can down the Road”? – Scaling Bitcoin Hong Kong Showcases Core BIPs


The Hong Kong edition of the Scaling Bitcoin workshop fulfilled its promise of delivering content from the leading developers in the Bitcoin ecosystem. Although the bitcoin block size debate has been very adversarial, the spirit of the conference seemed more collaborative, with occasionally heated but constructive debate over scaling the bitcoin network. We will ultimately have to wait and see whether this translates into real consensus on bitcoin's future.

On the first day, Adam Back, the president of Blockstream, a sponsor of the event, discussed fungibility in the context of bitcoin's scalability. Fungibility means that coins are equal and interchangeable, a key property of any electronic cash system.

Back covered the positive as well as the negative aspects of introducing fungibility mechanisms. Such mechanisms can increase the number of transactions or the size of individual transactions. Some also increase the growth of the unspent transaction output (UTXO) set as a side effect, which is highly undesirable for node resource requirements and scalability.

Back discussed the tradeoffs of mechanisms such as CoinJoin, confidential transactions, linkable ring signatures, the Zerocoin and Zerocash protocols, and "encrypted transactions" or "committed transactions."

Back appears to like the last two examples. He said this:

The other, more powerful fungibility or privacy mechanisms are the Zerocoin and Zerocash protocols. They are not UTXO-compatible; you have an ever-growing UTXO set or double-spend database. Zerocash hides the value, Zerocoin doesn't. Each set has a denomination, and you only have privacy in Zerocoin within the denomination. There are some Zerocoin extensions that can hide the value. The transactions are quite large. Zerocash is more practical on a size basis, but CPU expensive and uses some cutting-edge cryptography that we have less confidence in. Both of these systems have a trusted setup with a trapdoor key. But otherwise, Zerocash is quite ideal for fungibility and privacy.

Another type of fungibility mechanism that I proposed some time ago was encrypted transactions or committed transactions. It follows the Bitcoin model of having fungibility without privacy: it provides no privacy at all, but it improves fungibility. The way it works is that you have two-phase validation. In the first phase, the miner is able to tell that the transaction isn't a double spend. In the second phase, they learn who is being paid. The idea is that in the first phase, the miner has to mine the transaction, and the other phase happens perhaps a day later. In the second phase, all the miners learn an encryption key that allows them to decrypt the first-phase transaction, tell that it is valid, and do final-stage approval. There is a deterrent to censoring the second-stage transaction because the first one was already mined, and you would have to have a consensus rule to approve all valid second-stage transactions, or else you might orphan the entire day's work, which is quite expensive.
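
To make the two-phase idea more concrete, here is a minimal sketch of the flow Back describes, under simplifying assumptions: phase one publishes only an encrypted transaction plus a tag that lets miners reject a second spend of the same input, and phase two later reveals the key so miners can decrypt and fully validate. The spend tag and keystream construction below are illustrative stand-ins, not Back's actual scheme or real Bitcoin consensus code.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Illustrative hash-based keystream (not production cryptography)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR stream cipher: the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Phase 1: broadcast the encrypted transaction plus a tag derived from the
# input being spent, so miners can reject a second spend of the same input
# without learning who is being paid.
tx = b"spend utxo#42 -> pay alice 1.0 BTC"
key = os.urandom(32)
phase1 = {
    "ciphertext": xor_cipher(key, tx),
    "spend_tag": hashlib.sha256(b"utxo#42").hexdigest(),
}

# Phase 2 (perhaps a day later): the key is revealed; miners decrypt,
# confirm the transaction is valid, and give final-stage approval.
recovered = xor_cipher(key, phase1["ciphertext"])
assert recovered == tx
print("phase 1 tag:", phase1["spend_tag"][:16], "...; decrypted OK in phase 2")
```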

Back now believes that advanced fungibility systems might be more deployable than he previously thought, since some of the space overhead can be reclaimed. Evaluating this properly requires looking at more than the raw size of the transactions; it also requires analyzing the fungibility gained and the savings in UTXO set size and in the number of transactions that would otherwise have been used.

Madars Virza, a graduate student at MIT's Computer Science and Artificial Intelligence Laboratory, promoted zero-knowledge proofs as a tool for bitcoin scalability. A zero-knowledge proof is a method by which one party, the prover, can prove to another party, the verifier, that a given statement is true without conveying any information apart from the fact that the statement is indeed true.
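
To make the prover/verifier split concrete, here is a minimal sketch of a classic interactive proof of knowledge, a Schnorr identification round, in which the prover convinces the verifier that it knows a discrete logarithm x without revealing it. The toy parameters are illustrative assumptions; this is not the zk-SNARK machinery behind Virza's work, just a small example of the roles involved.

```python
# Minimal Schnorr-style proof of knowledge: the prover convinces the verifier
# it knows x with y = g^x mod p, without revealing x. Toy-sized parameters
# for illustration only.
import random

p = 2039          # safe prime: p = 2q + 1
q = 1019          # prime order of the subgroup generated by g
g = 4             # generator of the order-q subgroup

# Prover's secret and the matching public value.
x = random.randrange(1, q)
y = pow(g, x, p)

# Round 1 (commit): prover picks a random nonce and sends t = g^r.
r = random.randrange(1, q)
t = pow(g, r, p)

# Round 2 (challenge): verifier sends a random challenge c.
c = random.randrange(1, q)

# Round 3 (response): prover answers with s = r + c*x mod q.
s = (r + c * x) % q

# Verification: g^s must equal t * y^c, which holds only if the prover really
# knows x; the transcript (t, c, s) reveals nothing about x itself.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; secret x was never revealed")
```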

Virza claims that many of Bitcoin's scalability problems can be traced back to questions of privacy. He noted:

If all transactions are public, then receiving the wrong coin could be disastrous because it could taint all of your other bitcoin. Solvency is another example. Exchanges could claim that proving solvency is a privacy liability, well you could use zero-knowledge proofs. Mining centralization could also push back issues of privacy. Zero-knowledge proofs could be helpful for all of the above.

Virza demonstrated how zero-knowledge proofs could be used for dealing with fungibility and solvency issues.

Bitcoin developer Peter Todd argued that in adversarial environments, blockchains don't scale. Todd began by discussing the risks of miner centralization in China:

Before the United States learns about China's new block, they are going to find another block. Who is going to win the tie? Depending on how the hashrate is distributed, it will be China, and then eventually both sides will come to consensus. This is a pretty well-known feature of Bitcoin. This is how the system works. There's a big catch, though. When you talk about failure modes and incentives, you might ask, well, what percentage of hashing power does the Chinese side need for them to come out ahead? For them to earn more money than the other side? You can do the math on this.

It turns out to be about 30%, which is really interesting, because then you could ask: if I am behind the Great Firewall, and I don't have a better internet connection to the rest of the world, do I have an incentive to improve the situation? It's not clear that you do in many scenarios, which means that if I am not in China, then how do I add to the decentralization of bitcoin? What's the long-term progression of this?

Todd's talk was somewhat controversial, as he is against any significant increase in the block size. He proposes a "wait and see" strategy for bitcoin, a more circumspect or a more dithering approach, depending on one's perspective.

Todd said:

My proposal is simple. We should wait and see. We should not make hasty steps to go and push bitcoin down into a new trust model. If we do a block size increase, we should do something small, something within the same region that bitcoin operates in right now, and then see what happens in a few years.

This was not enough for some in the conference crowd. A bitcoin miner from China, perhaps responding to Todd's concerns over the large percentage of hashing power in China, asked whether there were any better options than just waiting. He asked Todd the following:

There are two ways to do mining. You can build mining equipment. The China way is like BTCC, raising money from each other, then using bitcoin to make .. then mining. There are two different ways. The Chinese way is more friendly to the Bitcoin community, not like our people wanting to make a lot of hashrate and destroy the system. Everyone wants to protect the system. Every Chinese mining pool wants to protect the system. We want to come here to know: do you have a suggestion or a solution? Your suggestion is to wait for a few years, then we do something? But we want a better way, we want more communication. What better idea do you have? Don't wait? Are there better ideas that don't involve waiting?

These questions drew laughter and some applause from the audience, but not the answer the questioner desired. The exchange was also interesting because this second Scaling Bitcoin event was held in Hong Kong in part to attract bitcoin miners from mainland China.

Discussing the other end of the bitcoin block size debate, Jonathan Toomim, a bitcoin miner by trade, explained why Gavin Andresen's BIP101 is his favorite proposal for scaling bitcoin. Toomim said:

My perspective on this is that scaling bitcoin is an engineering problem. My favorite proposal for how to scale bitcoin is BIP101. It's over a 20-year time span. This will give us time to implement fixes to get Bitcoin to a large scale. A hard fork to increase the block size limit is hard, and a soft fork to decrease it is easier, so we should hard-fork once, or at least infrequently, and then apply progressive short-term limits when we run into scaling limitations.

In theory, well, first let me say BIP101 is 8 MB now, and then doubling every 2 years for the next 20 years, eventually reaching 8192 MB (8 GB). This is BIP101. That 8 GB would in theory require a large amount of computing power: a 16-core CPU with 3.2 GHz per core, at about 5,000 sigops/sec/core. We would need about 256 GB of RAM. And you would need 8 GB every 64 seconds over the network. The kicker is the block propagation. With a 1 gigabit internet connection, it would take 1 minute to upload an 8 GB block. You could use IBLT or other techniques to transmit a smaller amount of data at the time that the block is found, basically the header and some diffs.
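
Toomim's numbers can be checked with some back-of-the-envelope arithmetic. The sketch below assumes the schedule as he states it (8 MB doubling every two years for twenty years; the actual BIP101 rule interpolates the limit linearly between doubling points) and a 1 gigabit-per-second link for pushing a full block to a single peer.

```python
# Back-of-the-envelope arithmetic for the BIP101 schedule Toomim describes:
# an 8 MB limit doubling every two years for twenty years, and the time to
# upload a full block over a 1 Gbit/s link. Illustrative only.

def bip101_limit_mb(years_from_start: int) -> int:
    """Block size limit in MB: 8 MB initially, doubling every 2 years, capped at 20 years."""
    return 8 * 2 ** (min(years_from_start, 20) // 2)

def upload_seconds(block_mb: int, bits_per_second: float = 1e9) -> float:
    """Seconds to push one full block to a single peer over the given link."""
    return block_mb * 1_000_000 * 8 / bits_per_second

for year in (0, 2, 10, 20):
    size = bip101_limit_mb(year)
    print(f"year {year:2d}: limit {size:5d} MB, "
          f"~{upload_seconds(size):5.1f} s to upload at 1 Gbit/s")
# Year 20 prints 8192 MB and ~65.5 s, roughly the "1 minute" Toomim cites.
```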

The issue of which bitcoin scalability proposal to adopt heated up again on the mining panel, whose participants represented more than 84% of the global hashing power. Moderated by Mikael Wang of BTCC, a large bitcoin miner and exchange provider, the panel included Liu Xiang Fu of Avalon, Pan Zhibiao of Bitmain, Robin Yao of BW, Wang Chun of F2Pool, Marshall Long of FinalHash, Sam Cole of KnCMiner and Alex Petrov of BitFury. The group answered audience questions on the various proposals for scaling the bitcoin network, with most of the discussion covering Jeff Garzik's BIP100 and Andresen's BIP101. There seemed to be more preference for the former proposal, and less agreement on combinations of the two.

[Chart: bitcoin mining pool hashrate distribution over 24 hours]

Source: Kaiko.com

On the second day, Pieter Wuille, a Bitcoin Core committer and another Blockstream employee, presented his new proposal for scaling bitcoin. He believes that applying "segregated witness" would bring about the desired scalability. Wuille explained that the "witness" is the signature data inside transactions, which accounts for 60% of the data on the blockchain, and proposed separating it out so that it can be discounted or pruned where it is not needed.

Wuille said:

This is my proposal that we do right now. We implement segregated witness right now, soon. What we do is discount the witness data by 75% for block size purposes. So this enables us to say we allow 4x as many signatures in the chain. What this normally corresponds to, with a typical transaction load, is around a 75% capacity increase for transactions that choose to use it. Another way of looking at it is that we raise the block size to 4 MB for the witness part, but the non-witness part stays the same size. The reason for doing this discount is that it disincentivizes UTXO impact. A signature that doesn't go into the UTXO set can be pruned.
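
A rough sketch of the accounting Wuille describes follows, assuming the rule as presented: witness bytes count for only a quarter of their actual size against the existing 1 MB limit. The witness fractions used below are assumptions for illustration; the real capacity gain depends on the actual transaction mix.

```python
# Rough sketch of the segregated witness discount as described: witness bytes
# are discounted by 75%, i.e. they count for one quarter of their size
# against the existing 1 MB limit. Witness fractions are illustrative.

BASE_LIMIT_BYTES = 1_000_000  # the existing 1 MB block size limit

def virtual_size(non_witness_bytes: float, witness_bytes: float) -> float:
    """Bytes counted against the limit after the 75% witness discount."""
    return non_witness_bytes + witness_bytes / 4

def capacity_multiplier(witness_fraction: float) -> float:
    """How many more raw transaction bytes fit if this fraction is witness data."""
    return 1 / ((1 - witness_fraction) + witness_fraction / 4)

for w in (0.0, 0.5, 0.6, 1.0):
    print(f"witness fraction {w:.0%}: {capacity_multiplier(w):.2f}x capacity")

# Example: ~1.8 MB of raw transactions that are 60% witness data still fit
# under the old 1 MB accounting once the discount is applied.
raw_bytes = 1_800_000
print("virtual size:", virtual_size(raw_bytes * 0.4, raw_bytes * 0.6),
      "of", BASE_LIMIT_BYTES, "bytes")
```

A load that is roughly 60% witness data comes out to about 1.8x, in the same ballpark as the "around 75% capacity increase" Wuille quotes, while pure witness data hits the 4x bound he mentions.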

Another Bitcoin Core committer, Jeff Garzik, provided an analysis of the features of the various Bitcoin scalability proposals, including BIP100, BIP101, BIP103 and BIP106.

He indicated that BIP100 shifts the block size selection to the free market, avoids anointing developers, and gives miners input on fee income. The bitcoin community's response has been that it gives miners too much control, that miners can sell votes costlessly, and that the limit increase is too large. In contrast, he highlighted how BIP101 and BIP103 are predictable and have no free-market sensitivity.

Obviously trying to avoid controversy, he added only a few specific comments about BIP101:

BIP101 has the theme of predictable growth: an immediate jump to 8 MB, doubling every 2 years. Activation is 750 of 1,000 blocks indicating support, then a 2-week grace period to turn on. It's a predictable increase, so that's good from a user perspective. There's no miner sensitivity. The fee market is significantly postponed: blocks are a limited resource, and transaction fees are bids for that limited resource, so if you have 4 MB of traffic and an 8 MB max block size, then you have no fee competition and fees would stay low. Community feedback is that it's a big size bump, and there was a negative community reaction to the politics around the Bitcoin XT drama.

That’s all I have to say about that.
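
The BIP101 activation rule Garzik summarizes, 750 of the last 1,000 blocks signaling support followed by a two-week grace period, can be sketched as a simple counting check. The block data below is hypothetical, generated only for illustration.

```python
# Minimal sketch of the supermajority activation check described for BIP101:
# 750 of the last 1,000 blocks must signal support, then a two-week grace
# period before the new rules turn on. Timestamps and signaling flags here
# are hypothetical stand-ins, not real chain data.
import random
from datetime import datetime, timedelta

WINDOW = 1000          # look-back window of blocks
THRESHOLD = 750        # supporting blocks needed within the window
GRACE = timedelta(days=14)

def activation_time(blocks):
    """Return when the new limit turns on, or None if it has not activated.

    `blocks` is an ordered list of (timestamp, signals_support) tuples.
    """
    for i in range(WINDOW - 1, len(blocks)):
        window = blocks[i - WINDOW + 1:i + 1]
        if sum(1 for _, signals in window if signals) >= THRESHOLD:
            # Grace period runs from the block that completes the supermajority.
            return window[-1][0] + GRACE
    return None

# Hypothetical chain where roughly 80% of miners signal support.
random.seed(1)
start = datetime(2016, 1, 11)
blocks = [(start + timedelta(minutes=10 * i), random.random() < 0.8)
          for i in range(1500)]
print("new limit turns on at:", activation_time(blocks))
```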

Garzik concluded by saying that, for him, "vendor hats are off now." He added that Bitcoin Core can do lots of theorizing and testing, but the Internet is the "best test lab in the world," and the only way to get full field data is to actually do a real hard fork. There is a venture capital consensus that wants to go beyond a 1 MB block size, while the technical consensus is that going above 1 MB presents security risks. He opined that this was "poor signalling" for users. He continued:

We have been kicking the can down the road. We have integrated libsecp256k1 to increase validation speed and reduce validation cost. These are big metrics in our system. We have been making positive strides on this. This should reduce some of the pressure to change the block size. The difficulty is finding an algorithm that cannot be gamed, cannot be bought, and is sensitive to miners. You can get two out of three, but not all three.