Papers Relating to Bitcoin and Related Subjects in Law: Part IX
This article was first published on Dr. Craig Wright’s blog and is republished with permission from the author. Read Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7 and Part 8.
The ability to scale a blockchain is related to the problem of scaling a distributed database. For example, the original ledger used in Bitcoin was a key-value database known for its ability to scale to high transaction volumes. The system was later modified to ensure that nodes could run on smaller systems such as a Raspberry Pi. Nevertheless, key-value databases such as LevelDB have since been extended to run with GPU-based acceleration and to offer higher levels of interoperability (Iliakis et al., 2022).
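To make the key-value pattern concrete, the following is a minimal sketch of storing serialized blocks under their hashes in LevelDB. It assumes the third-party plyvel Python binding and is illustrative only, not Bitcoin's actual storage code.

```python
# A minimal sketch (not Bitcoin's actual storage code): serialized blocks
# are stored in LevelDB under their double SHA-256 hash.
# Assumes the third-party `plyvel` LevelDB binding is installed.
import hashlib

import plyvel


def block_key(raw_block: bytes) -> bytes:
    """Derive a key by double SHA-256 hashing the serialized block."""
    return hashlib.sha256(hashlib.sha256(raw_block).digest()).digest()


def store_block(db: plyvel.DB, raw_block: bytes) -> bytes:
    """Write a serialized block under its hash and return the key."""
    key = block_key(raw_block)
    db.put(key, raw_block)
    return key


if __name__ == "__main__":
    db = plyvel.DB("/tmp/block-index-demo", create_if_missing=True)
    key = store_block(db, b"example serialized block bytes")
    assert db.get(key) == b"example serialized block bytes"
    db.close()
```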
Other research has focused on providing high availability and fault tolerance properties analogous to those within NoSQL and NewSQL (Zhou et al., 2021). Angelis and Ribeiro da Silva (2019) examine blockchain technologies with regard to uses extending beyond financial instruments. In such an analysis, it must be remembered that scale is a critical component of adoption. Therefore, analyzing the growth and scalability of blockchain-based solutions requires understanding both the value drivers and the ability to create scalable systems.
Annotated bibliography
Angelis, J., & Ribeiro da Silva, E. (2019). Blockchain adoption: A value driver perspective. Business Horizons, 62(3), 307–314. https://doi.org/10.1016/j.bushor.2018.12.001
The authors present a discussion of applications related to blockchain technology outside the production of financial systems. The approach examines the concept of a blockchain, analyzing the consensus and scalability of a system based on digital signatures and allowing the distribution of various tokens, including those used in monetary exchanges. The primary principles of a blockchain documented by the authors include deploying a system that is “highly transparent, secure, immutable and decentralized” (Angelis & Ribeiro da Silva, 2019, p. 308).
The paper extends to an analysis of blockchain maturity and provides examples of blockchain versions 1.0, 2.0 and 3.0, focusing on digital cash in the first instance, privacy and ‘smart contracts’ in the second version, and the development of ‘decentralized applications’ in the third. Unfortunately, the authors did not notice the scripting language in Bitcoin and failed to connect it to the ability to produce both “smart contracts” and dApps when Bitcoin was first launched. Consequently, the delineation of such maturity levels is problematic.
The authors then branch out into an analysis of the underlying logic behind the value propositions associated with blockchains. The analysis connects blockchain with artificial intelligence and claims that blockchain 4.0 will lie in the merging of these technologies. The implications lead to decentralized artificial intelligence and the creation of automated systems. Nevertheless, while the authors note the problems with the existing “hype cycle” (2019, p. 311), they present a paper that introduces many of the same dilemmas, focusing on overhyped technologies such as artificial intelligence rather than other technologies noted in the paper, including ERP (2019, p. 312).
Iliakis, K., Koliogeorgi, K., Litke, A., Varvarigou, T., & Soudris, D. (2022). GPU-accelerated blockchain over key-value database transactions. IET Blockchain, 2(1), 1–12. https://doi.org/10.1049/blc2.12011
Iliakis et al. (2022) analyze blockchain and distributed ledger technologies as they apply to the Internet of Things (IoT), finance, supply chain management and ERP applications. The paper provides an introductory analysis of blockchain technology, noting the use of GPUs, FPGAs and ASICs to solve the hash puzzle associated with the block reward, and observes that most existing blockchain-based systems use a NoSQL database. The authors compare the performance of LevelDB (a key-value store commonly deployed within blockchain systems) with a GPU-based alternative referred to as MegaKV.
The authors claim that this approach presents a hybrid CPU-and-GPU system, and that the integration of both accelerates the analysis and storage of transactions in the distributed ledger. To demonstrate this, the authors analyze directed acyclic graphs (DAGs), sidechains and sharding-based solutions. The belief that changing the consensus protocol improves performance and scalability is based on the argument that proof-of-work in Bitcoin is inefficient. However, the work fails to note the separation of the transaction data from the hashed block header. Despite this, the work on performance-optimized distribution of key-value stores is valuable.
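The separation mentioned above can be illustrated with a short sketch: transaction data enters the block header only through a Merkle root, so the proof-of-work hash covers a small, fixed-size header while the transactions themselves are stored separately. The header fields below are simplified placeholders rather than the full Bitcoin header layout.

```python
# A simplified sketch of the separation of transaction data from the hashed
# block header: transactions are committed to the header only through a
# Merkle root, so the proof-of-work hash covers the header, not the data.
import hashlib


def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()


def merkle_root(tx_hashes: list[bytes]) -> bytes:
    """Fold transaction hashes pairwise into a single root hash."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last hash if odd
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]


if __name__ == "__main__":
    txs = [b"tx-a", b"tx-b", b"tx-c"]              # stand-ins for raw transactions
    root = merkle_root([dsha256(tx) for tx in txs])
    header = b"prev-block-hash" + root + b"nonce"  # illustrative fields only
    print("block header hash:", dsha256(header).hex())
```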
In analyzing such options, the authors summarize state-of-the-art in-memory key-value store technology and compare it to LevelDB deployments. The analysis modeling GPU execution through MegaKV demonstrates a potential to decompose the database into multiple formats, and provides the methodology the authors used to emulate blockchain-like transactions. While the analysis was based on an experimental system, the authors demonstrated the potential for scaling solutions based on GPU-based database accelerators. Nevertheless, the authors still conclude that the main advantages of a blockchain are anonymity and decentralization, without documenting the purpose behind each tool.
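As a rough illustration of how such an emulation might be structured (this is not the authors' benchmark harness), the sketch below drives a generic key-value interface with synthetic transaction-like writes and reports throughput; a LevelDB- or GPU-backed store exposing the same put/get methods could be swapped in for comparison.

```python
# An illustrative harness (not the authors' benchmark) for emulating
# blockchain-like workloads against any key-value store exposing put/get.
import os
import time


def emulate_transactions(store, n_txs: int = 100_000, value_size: int = 256) -> float:
    """Write n_txs synthetic transaction records and return throughput in puts/sec."""
    payload = os.urandom(value_size)
    start = time.perf_counter()
    for i in range(n_txs):
        store.put(i.to_bytes(8, "big"), payload)   # txid stand-in -> record
    return n_txs / (time.perf_counter() - start)


class DictStore:
    """In-memory baseline; a disk- or GPU-backed store can be substituted."""
    def __init__(self):
        self._d = {}

    def put(self, key: bytes, value: bytes) -> None:
        self._d[key] = value

    def get(self, key: bytes) -> bytes:
        return self._d[key]


if __name__ == "__main__":
    print(f"dict baseline: {emulate_transactions(DictStore()):,.0f} puts/sec")
```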
Zhou, J., Xu, M., Shraer, A., Namasivayam, B., Miller, A., Tschannen, E., Atherton, S., Beamon, A. J., Sears, R., Leach, J., Rosenthal, D., Dong, X., Wilson, W., Collins, B., Scherer, D., Grieser, A., Liu, Y., Moore, A., Muppana, B., … Yadav, V. (2021). FoundationDB: A distributed unbundled transactional key value store. Proceedings of the 2021 International Conference on Management of Data, 2653–2666. https://doi.org/10.1145/3448016.3457559
Zhou et al. (2021) present an alternative key-value database option that integrates the scalability and flexibility of NoSQL with the ACID transactions deployed in NewSQL systems. The paper documents existing and competing alternatives to this form of database, and examines the core design principles associated with creating scalable data infrastructures. The architecture analysis includes the design of the system interface and the overall system architecture, from the control plane to the read/write structures.
The analysis models replication and read/write processes, providing a suitable instrument for documenting database design, and extends into geo-replication, failover systems and the inclusions necessary for system optimization. The section on optimization and scalability testing documents experiences and problems that can be associated with analyzing large database deployments. Most critically, the authors provide a framework that can be used to analyze other databases.
The most beneficial aspect of the article lies in the methodology and system used to measure and capture various metrics related to databases. For example, the authors measure the lag from storage servers to log servers, capture proxy redo time, integrate a variety of metrics related to geolocation across multiple data centers, and provide simulations and metrics for read/write operations against client read and commit requests. Still, as the authors note (2021, p. 2661), limitations exist and “the simulation is not able to reliably detect performance issues, such as an imperfect load balancing algorithm” or third-party libraries not implemented in the flow.
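A simplified sketch of this style of measurement is shown below; it is not the paper's instrumentation, but it captures the idea of timing read and commit operations against a store and reporting latency percentiles.

```python
# An illustrative sketch (not the paper's instrumentation) of capturing
# read and commit latencies and summarizing them as percentiles.
import statistics
import time


def timed(op, *args) -> float:
    """Run op(*args) and return the elapsed time in milliseconds."""
    start = time.perf_counter()
    op(*args)
    return (time.perf_counter() - start) * 1000.0


def percentile(samples: list[float], p: float) -> float:
    """Return the p-th percentile of the collected latency samples."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(p / 100.0 * len(ordered)))
    return ordered[idx]


if __name__ == "__main__":
    store = {}                     # stand-in for a database client handle
    commits = [timed(store.__setitem__, i, b"v") for i in range(10_000)]
    reads = [timed(store.get, i) for i in range(10_000)]
    for name, samples in (("commit", commits), ("read", reads)):
        print(f"{name}: p50={percentile(samples, 50):.4f} ms "
              f"p99={percentile(samples, 99):.4f} ms "
              f"mean={statistics.mean(samples):.4f} ms")
```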
This article has been lightly edited for clarity.
See: Re-inventing Business with Blockchain