
Cardano: robust, resilient – and flexible

With its modular, parameter-based approach, Cardano has been architected with true scalability in mind

21 October 2021 Kevin Hammond 10 mins read


Cardano is designed to serve millions of users in a globally distributed way. As with any other decentralized blockchain, this means that we need to produce a predictable and consistent supply of new blocks that collectively grow the chain and transparently record transactions between users. In order to ensure that new blocks are propagated across the network as a whole in an effective and secure way, it is important that the system consumes computation, memory, storage, and network resources efficiently.

Flexibility is key, so an important feature of the Cardano protocol is that it has been architected with true scalability in mind. This isn't just about the longer-term ability to provide the infrastructure for a truly global, fully decentralized operating system; its parameterization approach is also designed to flex and adjust to pricing fluctuations, network saturation, or increased demand, for example. A number of protocol parameters allow the system behavior to be tuned without the need for a hard fork. More significant upgrades that do require a hard fork can be deftly managed using our hard fork combinator (HFC) technology. Together, these are significant differentiators for Cardano, giving us robustness and reliability today, and highly agile upgrade paths as the network grows and usage evolves.

Cardano’s roadmap was also conceived in a series of stages that would take us step by step toward our ultimate destination. Byron was about basic transactional capability within a federated network. This gave us the ability to start building a community and partnerships while working on the next stage. The Byron reboot gave us the firm foundations to build out further capability, while Shelley introduced stake pools, further expanding the community and introducing 100% decentralized block production.

This year, we have introduced a number of new, highly-anticipated features. Since early 2021, with the Mary era, Cardano has supported multi-assets and non-fungible token (NFT) creation on the ledger. With low fees and no need for smart contracts, we have seen an explosion of activity in this exciting area. September’s Alonzo upgrade has brought support for Plutus smart contracts that enable the development of a wide range of decentralized applications (DApps). It's relatively early days for smart contracts, but with dozens of projects working on DApps and a number getting close to the deployment stage, things will soon start to accelerate. These new capabilities influence how the ledger processes new scripts and transactions, and place new demands on the available resources. As activity grows, our architecture will allow us the agility to flex and adapt as required.

Network capacity

Networking lies at the heart of all Cardano operations. The Cardano network distributes transactions and blocks across globally distributed nodes that produce and verify the blockchain. This is called data diffusion, and it is essential to provide the needed information to nodes for the consensus algorithm to make its decisions. These decisions drive the chain forward, as a consensus between the nodes ensures that all transactions are verified, validated and thus can be transparently included in a new block.

Cardano is based on the decentralized Ouroboros Praos consensus protocol. Cardano smoothly transitioned to Praos from the previous federated Ouroboros Classic protocol via a series of changes to a protocol parameter d. Ouroboros Praos establishes enhanced security guarantees and has been delivered with peer-reviewed research papers presented in top-tier cybersecurity and cryptography conferences and journals.

Networking performance impacts how fast the system works as a whole. This includes measures such as:

  • throughput (volume of data transferred)
  • timeliness (the block adoption time)

These two requirements are in tension with each other. We can maximize throughput when the generated blocks are used most efficiently. This, in turn, implies sufficient buffering to hide the latency that is inherent in a globally distributed system.

More buffering can often imply better block (and network) utilization, but it comes at the cost of increased delay (time to adoption in the chain) when the system is heavily saturated.

Block budget

To understand how fast transactions and scripts can be executed on Cardano, we should first define the notion of the block budget. The overall size of a block is currently limited to a maximum of 64 KB, representing a balance between ensuring good network utilization and minimizing transaction latencies. A single block may contain a mixture of transactions, including ones with Plutus scripts (smart contracts), native tokens, metadata, and simple ada transactions (payments). Similarly, a single transaction is currently limited to a maximum of 16 KB. This ensures that a single block will always contain multiple transactions (at least 4, but generally many more), thus improving the overall transaction throughput.
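The size relationship above can be sketched in a few lines. The 64 KB and 16 KB limits come from the figures quoted in this post; the function names and the 300-byte example payment size are illustrative assumptions, not Cardano node code.

```python
# Illustrative sketch of the block/transaction size budget described above.
# The 64 KB and 16 KB limits are the values quoted in the post.

MAX_BLOCK_SIZE = 64 * 1024   # maximum block body size in bytes
MAX_TX_SIZE = 16 * 1024      # maximum single-transaction size in bytes

def min_txs_per_full_block() -> int:
    """A full block always fits at least this many maximum-size transactions."""
    return MAX_BLOCK_SIZE // MAX_TX_SIZE

def txs_per_block(avg_tx_size: int) -> int:
    """How many transactions of a given average size fit in one block."""
    return MAX_BLOCK_SIZE // avg_tx_size

print(min_txs_per_full_block())   # → 4, matching the "at least 4" above
print(txs_per_block(300))         # small payments pack far more densely
```

Even in the worst case of maximum-size transactions, a block carries four of them; typical payments are far smaller, so real blocks hold many more.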

The block time budget is the fixed amount of time that is available to process all the transactions included in a single block. This is divided between the time that can be used for Plutus script execution and the time that is available for executing other transactions. This division ensures that transactions with Plutus scripts cannot monopolize the available time budget, and that it will always be possible for the system to process simple payments in the same block that contains Plutus scripts. The total time budget for producing each block (including networking costs) is set to 1 second, with a budget of approximately 50 milliseconds available for Plutus script execution. In practice, this is a generous allowance – our benchmarking has shown that many real scripts will execute in 1 millisecond or less on a reference system.
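A minimal sketch of this budget split, assuming the 1-second block budget and ~50 ms Plutus allowance quoted above. The admission rule and function name are simplifications for illustration, not the node's actual scheduling logic.

```python
# Toy model of the per-block time budget split described above.
BLOCK_TIME_BUDGET_MS = 1000    # total per-block processing budget (1 second)
PLUTUS_TIME_BUDGET_MS = 50     # portion reserved for Plutus script execution

def scripts_fit_in_block(script_times_ms: list[float]) -> bool:
    """Scripts are admitted only while their combined execution time stays
    within the Plutus share, so simple payments always keep the remainder
    of the block budget."""
    return sum(script_times_ms) <= PLUTUS_TIME_BUDGET_MS

print(scripts_fit_in_block([1.0] * 40))    # → True: forty 1 ms scripts fit
print(scripts_fit_in_block([30.0, 30.0]))  # → False: would exceed the share
```

Because typical scripts run in about a millisecond, dozens can share one block while leaving the bulk of the budget for ordinary payments.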

For security reasons, the Praos consensus protocol selects only a small fraction (on average, one in 20) of the slots that could potentially produce blocks, so a new block is added to the chain roughly every 20 seconds. For the current protocol parameters, the maximum transaction throughput (for simple transactions) is then approximately 11 transactions per second (TPS). Obviously, different transactions will vary in size and have different effective payloads. A single transaction could finalize an entire Catalyst voting round, for example, transferring millions of dollars of value.
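The ~11 TPS figure can be reproduced with back-of-the-envelope arithmetic. The 1-second budget, the one-in-20 fraction, and the 64 KB block size come from this post; the ~300-byte average size for a simple payment is an assumption used purely to make the numbers work through.

```python
# Back-of-the-envelope derivation of the ~11 TPS figure quoted above.
SLOT_LENGTH_S = 1.0
ACTIVE_SLOT_FRACTION = 1 / 20      # roughly one slot in 20 produces a block
MAX_BLOCK_SIZE = 64 * 1024         # bytes
AVG_SIMPLE_TX_BYTES = 300          # assumed average payment size, not a protocol constant

blocks_per_second = ACTIVE_SLOT_FRACTION / SLOT_LENGTH_S
txs_per_block = MAX_BLOCK_SIZE // AVG_SIMPLE_TX_BYTES
tps = blocks_per_second * txs_per_block

print(f"{tps:.1f} simple transactions per second")  # → 10.9
```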

As discussed above, each block is filled with a number of transactions that have been submitted by end users from wallets, the command-line interface (CLI), etc. These transactions are kept in a temporary in-memory holding area (the mempool) until they are ready to be processed and included in a block. Pending transactions are removed from the mempool as a block is minted, and new transactions can then be added to the mempool. By using a fixed-size mempool, we avoid the possibility of nodes being overloaded during high-demand periods, but this means that it may be necessary for a wallet or application to re-submit transactions. The mempool size is currently set to 128 KB: twice the current block size. This has been chosen based on queuing models.
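The fixed-size mempool and its backpressure behavior can be sketched as a toy model. The 128 KB capacity and 64 KB block size come from the post; the admission and draining logic here is a deliberate simplification of the "reject when full, client retries later" pattern, not the node's actual implementation.

```python
# Toy model of the fixed-size mempool described above.
MEMPOOL_CAPACITY = 128 * 1024   # bytes: twice the 64 KB block size

class Mempool:
    def __init__(self, capacity: int = MEMPOOL_CAPACITY):
        self.capacity = capacity
        self.txs: list[bytes] = []

    @property
    def used(self) -> int:
        return sum(len(tx) for tx in self.txs)

    def try_add(self, tx: bytes) -> bool:
        """Admit a transaction only if it fits. A False result is the
        backpressure signal: the wallet or application should retry later."""
        if self.used + len(tx) > self.capacity:
            return False
        self.txs.append(tx)
        return True

    def take_block(self, block_size: int = 64 * 1024) -> list[bytes]:
        """Drain pending transactions into a block, freeing mempool space
        for new submissions."""
        block, rest, used = [], [], 0
        for tx in self.txs:
            if used + len(tx) <= block_size:
                block.append(tx)
                used += len(tx)
            else:
                rest.append(tx)
        self.txs = rest
        return block

pool = Mempool()
accepted = sum(pool.try_add(b"x" * 16 * 1024) for _ in range(10))
print(accepted)                 # → 8: the pool holds exactly two blocks' worth
print(len(pool.take_block()))   # → 4: minting a block frees half the pool
```

The rejected submissions are not lost; they simply wait at the edge of the system until a block is minted and capacity frees up, which is the essence of backpressure.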

Stretching the network

Ouroboros is designed to handle a large volume of data as well as transactions and scripts of different complexity and size. At present, and with current parameters, the Cardano network is utilizing on average only around 25% of its capacity. Of course, the most efficient scenario is for Cardano to run at or near 100% of its capacity (where the network is saturated). While many networking solutions would suffer under such conditions, both Ouroboros and the Cardano network stack have been designed to be fair and highly resilient even under heavy saturation. Benchmarking analysis shows that even at 200% saturation, overall performance remains resilient, with no network failures. Even when stress testing at 44 times the total available network capacity, there are still no failures (though some transactions may be slightly delayed). The network is designed to work this way, using backpressure to manage the overall system load. So while some individual users taking part in a large NFT drop may experience longer wait times for their transactions, for example, or may need to resubmit the occasional transaction from a large batch (or spread the drop over a longer time period), this does not mean that the network is ‘struggling’. It actually means the network is performing as intended.

Wallets

Wallets act on behalf of end-users to submit payments and other transactions to the blockchain, and to track the blockchain status. One of the key services that a wallet provides is to submit transactions on the user’s behalf, confirm that they have been accepted onto the blockchain, and retry them on their behalf if the submission has not succeeded. That is, the wallet should take into account the effects of backpressure in the network as it becomes saturated, as well as other network effects (temporary disconnection, possible chain forks, etc). Wallets may be either:

  • Full-node wallets (such as Daedalus), which use local computing and network resources to run a node that connects directly to the Cardano network.
  • Light wallets, which, in contrast, use shared computing and networking resources to serve a number of end users.

During periods of high demand (e.g., an NFT sale), both types of wallets may need to retry transactions. Since they share resources among many users, light wallets may need to temporarily scale the available computing and networking resources (including replicating endpoints) to ensure that user demand can be met. This demand-scaling is similar to the requirements that are placed when a company releases a popular new product, for example. In contrast, full node wallets may be essentially unaffected. Transactions may be slightly delayed, but each wallet will have the dedicated resources that are needed to retry the submission, including its own network connections. Similar principles apply to DApp providers – where specific network endpoints are provided, the system resources should be scaled to meet the demand.
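The retry behavior described above can be sketched as a simple submit-with-backoff loop. Here `submit` is a stand-in for whatever submission API a particular wallet uses, and the exponential backoff schedule is an illustrative choice, not drawn from any wallet's specification.

```python
import time
from typing import Callable

def submit_with_retry(submit: Callable[[bytes], bool],
                      tx: bytes,
                      max_attempts: int = 5,
                      base_delay_s: float = 1.0) -> bool:
    """Retry a transaction submission until it is accepted or attempts
    are exhausted, backing off between tries.

    A False return from `submit` models the network applying backpressure
    (or a temporary disconnection); the wallet waits progressively longer
    before trying again, rather than hammering a saturated network."""
    for attempt in range(max_attempts):
        if submit(tx):
            return True
        time.sleep(base_delay_s * 2 ** attempt)
    return False
```

A full-node wallet runs this against its own dedicated node; a light wallet runs it against shared endpoints, which is why those endpoints may need to scale out during demand spikes.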

Process optimization

We naturally welcome the innovation (and the dialog) that we are currently seeing in the NFT community. To improve the user experience, it is necessary to optimize development procedures so that processes such as NFT creation work well even when the system is saturated. Many NFT creators are already using batch minting for greater efficiency, for example.
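The benefit of batch minting can be illustrated with a rough capacity estimate. The 16 KB transaction limit comes from this post; the per-mint and per-transaction overhead sizes below are purely hypothetical numbers chosen for illustration.

```python
# Rough estimate of how batching shrinks the transaction count for an NFT
# drop. The byte sizes for a single mint and for transaction overhead are
# assumptions, not measured Cardano values.
MAX_TX_SIZE = 16 * 1024

def batch_tx_count(n_nfts: int,
                   bytes_per_mint: int = 400,
                   tx_overhead: int = 1024) -> int:
    """Number of transactions needed if mints are packed into full batches,
    versus the n_nfts transactions a one-mint-per-transaction drop would use."""
    mints_per_tx = (MAX_TX_SIZE - tx_overhead) // bytes_per_mint
    # Ceiling division: partially full final batch still needs a transaction.
    return -(-n_nfts // mints_per_tx)

print(batch_tx_count(10_000))  # far fewer than 10,000 single-mint transactions
```

Under these assumed sizes, a 10,000-piece drop needs a few hundred transactions instead of 10,000, which is precisely the kind of congestion-reducing optimization encouraged here.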

We would encourage creators to look at how they can continue to optimize their own efforts in order to minimize network congestion. We'd also encourage everyone to join the Discord discussions as part of our Creator community, where we're making our engineers available to help find the best solution for each particular case.

As well as the flexibility afforded by parameter adjustments – which can be made within an epoch if required – further options will come into play in the medium and longer term. Hydra allows multiple operations to be run in parallel, which grants enhanced scalability. Its state-channel solutions increase system throughput while also reducing the demand for on-chain execution. However, while Hydra helps with multiple scalability use cases, it doesn't specifically address NFT creation efficiency. As Cardano continues to mature and grow, we will continue to look at how we optimize the network and manage network capacity. As I recently discussed in our October mid-month update, as the network starts running at higher capacity, we'll be able to tune Cardano's parameters as needed – for example, reducing the block time budget, optimizing the size and execution time of Plutus scripts, lowering their execution cost, and improving throughput.

Join our Discord community today to find out more and to discuss all things Cardano with our dedicated community.

Thanks to Neil Davies and Olga Hryniuk for their additional contributions and support in writing this post.