
Enter the Hydra: scaling distributed ledgers, the evidence-based way

Learn about Hydra: the multi-headed ledger protocol

26 March 2020 Prof Aggelos Kiayias 10 mins read


Scalability is the greatest challenge to blockchain adoption. By applying a principled, evidence-based approach, we have arrived at a solution for Cardano and networks similar to it: Hydra. Hydra is the culmination of extensive research, and a decisive step in enabling decentralized networks to securely scale to global requirements.

What is scalability and how do we measure it?

Scaling a distributed ledger system refers to its capability to provide high transaction throughput, low latency, and minimal storage per node. These properties have been repeatedly touted as critical for the successful deployment of blockchain protocols as part of real-world systems. In terms of throughput, the VISA network reportedly handles an average of 1,736 payment transactions per second (TPS), with the capability of handling up to 24,000 TPS, and is frequently used as a baseline comparison. Transaction latency should be as low as possible, with the ultimate goal of appearing instantaneous to the end user. Other applications of distributed ledgers have a wide range of different requirements in terms of these metrics. When designing a general-purpose distributed ledger, it is natural to strive to excel on all three counts.

Deploying a system that provides satisfactory scaling for a certain use case requires an appropriate combination of two independent aspects: adopting a proper algorithmic design and deploying it over a suitable underlying hardware and network infrastructure.

When evaluating a particular algorithmic design, considering absolute numbers in terms of specific metrics can be misleading. The reason is that such absolute quantities must refer to a particular underlying hardware and network configuration which can blur the advantages and disadvantages of particular algorithms. Indeed, a poorly designed protocol may still perform well enough when deployed over superior hardware and networking.

For this reason, it is more insightful to evaluate the ability of a protocol to reach the physical limits of the underlying network and hardware. This can be achieved by comparing the protocol with simple strawman protocols, in which all the design elements have been stripped away. For instance, if we want to evaluate the overhead of an encryption algorithm, we can compare the communication performance of two end-points using encryption against their performance when they simply exchange unencrypted messages. In such an experiment, the absolute message-per-second rate is unimportant. The important conclusion is the relative overhead that is added by the encryption algorithm. Moreover, in case the overhead approximates 0 for some configuration of the experimental setup, we can conclude that the algorithm approximates the physical limits of the underlying network’s message-passing ability for that particular configuration, and is hence optimal in this sense.
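As a toy illustration of this methodology (a Python sketch, with hashing standing in for the extra processing step; the payload sizes and iteration counts are arbitrary choices), one can measure the strawman baseline and the same workload plus a per-message transform, and report only the relative overhead:

```python
import time
import hashlib

def measure(f, payloads):
    """Time how long it takes to run f over every payload."""
    start = time.perf_counter()
    for p in payloads:
        f(p)
    return time.perf_counter() - start

payloads = [bytes([i % 256]) * 1024 for i in range(10_000)]

def plain(p):
    # Strawman: just "deliver" the message, no extra processing.
    return p

def processed(p):
    # Same delivery plus a per-message transform (hashing as a stand-in
    # for encryption or any other algorithmic step under evaluation).
    return hashlib.sha256(p).digest() + p

t_plain = measure(plain, payloads)
t_proc = measure(processed, payloads)

# The absolute times depend on the hardware and are uninteresting;
# the relative overhead is the meaningful result of the experiment.
overhead = t_proc / t_plain - 1.0
print(f"relative overhead vs strawman: {overhead:.2f}x")
```

If the relative overhead approaches zero for some configuration, the transform is effectively free in that setting; the same comparison shape applies when evaluating a full protocol against a stripped-down baseline.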

Hydra – a 30,000-foot view

Hydra is an off-chain scalability architecture for distributed ledgers, which addresses all three of the scalability challenges mentioned above: high transaction throughput, low latency, and minimal storage per node. While Hydra is being designed in conjunction with the Ouroboros protocol and the Cardano ledger, it may be employed over other systems as well, provided they share the necessary salient characteristics with Cardano.

Despite being an integrated system aimed at solving one problem – scalability – Hydra consists of several subprotocols. This is necessary as the Cardano ecosystem itself is heterogeneous and consists of multiple entities with differing technical capabilities: the system supports block producers with associated stake pools, high-throughput wallets as used by exchanges, but also end-users with a wide variety of computational performance and availability characteristics. It is unrealistic to expect that a one-size-fits-all, single-protocol approach is sufficient to provide overall scalability for such a diverse set of network participants.

The Hydra scalability architecture can be divided into four components: the head protocol, the tail protocol, the cross-head-and-tail communication protocol, as well as a set of supporting protocols for routing, reconfiguration, and virtualization. The centerpiece is the 'head' protocol, which enables a set of high-performance and high-availability participants (such as stake pools) to very quickly process large numbers of transactions with minimal storage requirements by way of a multiparty state channel – a concept that generalizes two-party payment channels as implemented in the context of the Lightning network. It is complemented by the 'tail' protocol, which enables those high-performance participants to provide scalability for large numbers of end users who may use the system from low-power devices, such as mobile phones, and who may be offline for extended periods of time. While heads and tails can already communicate via the Cardano mainchain, the cross-head-and-tail communication protocol provides an efficient off-chain variant of this functionality. All this is tied together by routing and configuration management, while virtualization generalizes head and tail communication to enable faster interaction.

The Hydra head protocol

The Hydra head protocol is the first component of the Hydra architecture to be publicly released. It allows a set of participants to create an off-chain state channel (called a head) wherein they can run smart contracts (or process simpler transactions) among each other without interaction with the underlying blockchain in the optimistic case where all head participants adhere to the protocol. The state channel offers very fast settlement and high transaction throughput; furthermore, it requires very little storage, as the off-chain transaction history can be deleted as soon as its resulting state has been secured via an off-chain 'snapshot' operation.
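The snapshot idea can be sketched in a few lines. This is a hypothetical Python toy, not the actual protocol: the real head secures snapshots with multi-signatures, whereas here 'signing' is elided and the state transition is a simple dictionary merge of invented balances:

```python
class Head:
    """Toy model of a head's local storage: current state plus the
    transactions accumulated since the last snapshot."""

    def __init__(self, state):
        self.state = dict(state)
        self.pending = []          # txs applied since the last snapshot

    def apply(self, tx):
        self.pending.append(tx)
        self.state.update(tx)      # toy state transition: merge key/values

    def snapshot(self):
        # Once the participants have agreed on (co-signed) this state,
        # the transaction history behind it is no longer needed.
        self.pending.clear()
        return dict(self.state)

h = Head({"alice": 10, "bob": 5})
h.apply({"alice": 7, "bob": 8})
h.apply({"bob": 6, "carol": 2})
snap = h.snapshot()
print(snap)              # latest agreed state survives
print(len(h.pending))    # 0: off-chain history pruned
```

The storage requirement of a head is therefore bounded by the size of the latest snapshot, not by the length of the transaction history.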

Even in the pessimistic case where any number of participants misbehave, full safety is rigorously guaranteed. At any time, any participant can initiate the head's 'closure' with the effect that the head's state is transferred back to the (less efficient) blockchain. We emphasize that the execution of any smart contracts can be seamlessly continued on-chain. No funds can be generated off-chain, nor can any single, responsive head participant lose any funds.

The state channels implemented by Hydra are isomorphic in the sense that they make use of the same transaction format and contract code as the underlying blockchain: contracts can be directly moved back and forth between channels and the blockchain. Thus, state channels effectively yield parallel, off-chain ledger siblings. In other words, the ledger becomes multi-headed.

Transaction confirmation in the head is achieved in full concurrency by an asynchronous off-chain certification process using multi-signatures. This high level of parallelism is enabled by use of the extended UTxO model (EUTxO). Transaction dependencies in the EUTxO model are explicit, which allows for state updates without unnecessary sequentialization of transactions that are independent of each other.
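The point about explicit dependencies can be illustrated with a small Python sketch. This is a toy model, not Cardano code, and the transaction and UTxO identifiers are invented: two transactions can be confirmed concurrently exactly when neither touches the other's inputs or outputs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    tx_id: str
    inputs: frozenset   # references to the UTxOs this tx consumes
    outputs: tuple      # references to the new UTxOs this tx creates

def independent(a: Tx, b: Tx) -> bool:
    """Txs can be confirmed concurrently iff neither spends the other's
    inputs or outputs -- dependencies are explicit in the EUTxO model."""
    return (a.inputs.isdisjoint(b.inputs)
            and a.inputs.isdisjoint(b.outputs)
            and b.inputs.isdisjoint(a.outputs))

t1 = Tx("t1", frozenset({"utxo#0"}), ("t1#0",))
t2 = Tx("t2", frozenset({"utxo#1"}), ("t2#0",))
t3 = Tx("t3", frozenset({"t1#0"}), ("t3#0",))  # consumes an output of t1

print(independent(t1, t2))  # True: disjoint, safe to process in parallel
print(independent(t1, t3))  # False: t3 must wait for t1
```

In an account-based model this check would require inspecting contract state; in EUTxO it is a set-disjointness test on the transaction alone, which is what makes the asynchronous, fully concurrent confirmation possible.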

Experimental validation of the Hydra head protocol

As a first step towards experimentally validating the performance of the Hydra head protocol, we implemented a simulation. The simulation is parameterized by the time required by individual actions (validating transactions, verifying signatures, etc.), and carries out a realistic and timing-correct simulation of a cluster of distributed nodes forming a head. This results in realistic transaction confirmation time and throughput calculations.

We see that a single Hydra head achieves up to roughly 1,000 TPS, so by running 1,000 heads in parallel (for example, one for each stake pool of the Shelley release), we should achieve a million TPS. That’s impressive and puts us miles ahead of the competition, but why should we stop there? 2,000 heads will give us 2 million TPS – and if someone demands a billion TPS, then we can tell them to just run a million heads. Furthermore, various performance improvements in the implementation can improve the 1,000 TPS single head measurement, further adding to the protocol’s hypothetical performance.

So, can we just reach any TPS number that we want? In theory the answer is a solid yes, and that points to a problem with the dominant usage of TPS as a metric to compare systems. While it is tempting to reduce the complexity of assessing protocol performance to a single number, in practice this leads to an oversimplification. Without further context, a TPS number is close to meaningless. In order to properly interpret it, and make comparisons, you should at least ask for:

  • the size of the cluster (which influences the communication overhead);
  • its geographic distribution (which determines how much time it takes for information to transit through the system);
  • how the quality of service (transaction confirmation times, providing data to end users) is impacted by a high rate of transactions;
  • how large and complicated the transactions are (which has an impact on transaction validation times, message propagation time, requirements on the local storage system, and the composition of the head participants);
  • and what kind of hardware and network connections were used in the experiments.

Changing the complexity of transactions alone can change the TPS by a factor of three, as can be seen in the figures in the paper (refer to Section 7 – Simulations).

Clearly, we need a better standard. Is the Hydra head protocol a good protocol design? The question to ask is whether it approaches the physical limits of the network, not what raw TPS number it posts. Thus, for this first iteration of the evaluation of the Hydra head protocol, we used the following approach to ensure that the data we provide is properly meaningful:

  1. We clearly list all the parameters that influence the simulation: transaction size, time to validate a single transaction, time needed for cryptographic operations, allocated bandwidth per node, cluster size and geographical distribution, and limits on the parallelism in which transactions can be issued. Without this controlled environment, it would be impossible to reproduce our numbers.
  2. We compare the protocol’s performance to baselines that provide precise and absolute limits of the underlying network and hardware infrastructure. How well we approach those limits tells us how much room there would be for further improvements. This follows the methodology explained above using the example of an encryption algorithm.

We use two baselines for Hydra. The first, Full Trust, is universal: it applies to any protocol that distributes transactions amongst nodes and insists that each node validate transactions one after the other – without even ensuring consensus. This yields a limit on TPS by simply adding the message delivery and validation times. How well we approach this limit tells us what price we are paying for consensus, without relying on comparison with other protocols. The second baseline, Hydra Unlimited, yields a TPS limit specifically for the head protocol and also provides the ideal latency and storage for any protocol. We achieve that by assuming that we can send enough transactions in parallel to completely amortize network round-trip times and that all actions can be carried out when needed, without resource contention. The baseline helps us answer the question of what can be achieved under ideal circumstances with the general design of Hydra (for a given set of values of the input parameters) as well as evaluate confirmation latency and storage overhead against any possible protocol. More details and graphs for those interested can be found in our paper (again, Section 7 – Simulations).
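To make the Full Trust baseline concrete, here is a back-of-the-envelope Python sketch with invented per-transaction times; the point is the shape of the calculation, not the particular numbers:

```python
# Full Trust baseline: every transaction must be delivered to a node and
# validated, one after the other, with no consensus at all. Throughput is
# therefore bounded by per-transaction delivery + validation time.
# Both figures below are assumptions for illustration only.
validation_ms = 0.4   # time to validate one transaction
delivery_ms = 0.6     # time to deliver one transaction to a node

tps_limit = 1000.0 / (validation_ms + delivery_ms)
print(f"Full Trust TPS ceiling: {tps_limit:.0f}")

# A protocol measured at, say, 850 TPS in the same controlled setting
# pays this fraction of throughput as the price of consensus:
measured_tps = 850.0
consensus_cost = 1 - measured_tps / tps_limit
print(f"price of consensus: {consensus_cost:.0%}")
```

Because the baseline and the measurement share the same parameters (transaction size, validation time, bandwidth, cluster geometry), the resulting ratio is meaningful in a way that a bare TPS number is not.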

What comes next?

Solving the scalability question is the holy grail for the whole blockchain space. The time has come to apply a principled, evidence-based approach in designing and engineering blockchain scalability solutions. Comparing scalability proposals against well-defined baselines can be a significant aid in the design of such protocols. It provides solid evidence for the appropriateness of the design choices and ultimately leads to the engineering of effective and performant distributed ledger protocols that will provide the best possible absolute metrics for use cases of interest. As the Hydra head protocol is implemented and tested, we will, in time, release the rest of the Hydra components following the same principled approach.

As a last note, Hydra is the joint effort of a number of researchers, whom I'd like to thank. These include Manuel Chakravarty, Sandro Coretti, Matthias Fitzi, Peter Gaži, Philipp Kant, and Alexander Russell. The research was also supported, in part, by EU Project No. 780477, PRIVILEDGE, which we gratefully acknowledge.

From Classic to Hydra: the implementations of Ouroboros explained

Ouroboros is the consensus protocol of Cardano. Here, we explain what it does and how it’s evolving

23 March 2020 Kieran Costello 9 mins read


With the recent BFT update to the Byron mainnet, and the freshly published Hydra paper, you’ll probably have heard a lot about Ouroboros: the ground-breaking proof-of-stake consensus protocol used by Cardano. Developed as a more energy-efficient and sustainable alternative to proof of work – upon which earlier cryptocurrencies, such as Bitcoin and, currently, Ethereum, are built – Ouroboros was the first blockchain consensus protocol to be developed through peer-reviewed research.

Led by Aggelos Kiayias at the University of Edinburgh, Ouroboros and its subsequent implementations – BFT, Praos, Genesis, Hydra – provide a new baseline to solve some of the world’s greatest challenges, securely and at scale.

Yet recognition begins with education, and we cannot rely on what a technology achieves to convey the how. In this article, we present an overview of the how of Ouroboros. We’ll examine the tangibles and cover what each implementation introduces, to further the community’s understanding of the protocol, and illustrate why it’s such a game changer. A detailed analysis of each implementation can be found in the corresponding whitepapers below. For a broad-stroke explanation of Ouroboros and its implementations, however, read on.

  • Ouroboros Classic
  • Ouroboros BFT
  • Ouroboros Praos
  • Ouroboros Genesis
  • Ouroboros Hydra

A word on consensus protocols, and why Ouroboros is different

It’s reasonable to assume that anybody new to the space might be confused by the term 'consensus protocol'. Put simply, a consensus protocol is the system of laws and parameters that govern the behavior of distributed ledgers: a ruleset by which each network participant plays.

Public blockchains aren’t controlled by any single, central authority. Instead, a consensus protocol is used to allow distributed network participants to agree on the history of the network captured on the blockchain – to reach consensus on what has happened, and continue from a single source of truth.

That single source of truth provides a single record. This is why blockchains are sometimes referred to as trustless: instead of requiring participants to trust one another, trust is built into the protocol. Unknown actors may interact and transact with each other without relying on an intermediary to mediate, or for there to be a prerequisite exchange of personal data.

Ouroboros is a proof-of-stake protocol, which is distinct from proof of work. Rather than relying on 'miners' to solve computationally complex equations to create new blocks – and rewarding the first to do so – proof of stake selects participants (in the case of Cardano, stake pools) to create new blocks based on the stake they control in the network.

Networks using Ouroboros are many times more energy efficient than those using proof of work – and, through Ouroboros, Cardano is able to achieve unparalleled energy efficiency. At the same level of decentralization – for example, 100 pools, which exceeds Bitcoin’s current network – Cardano could consume as little as 0.01567 GWh (gigawatt-hours) per year. Bitcoin, meanwhile, would require 67,000 GWh per year (according to current statistics). This is based on Ouroboros' ability to run on a Raspberry Pi, which has a power consumption of 15 to 18 watts. In theory, this equates to more than four million times the energy efficiency. The resulting difference in energy use can be analogized to that between a household and a small country: one can be scaled to the mass market; the other cannot.
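The arithmetic behind these figures is easy to check. The Python sketch below uses the assumptions from the text (100 nodes, each a Raspberry Pi drawing roughly 18 watts, running all year); real-world consumption will of course vary:

```python
# Annual energy of a 100-node network of low-power machines.
nodes = 100
watts_per_node = 18              # upper end of the Raspberry Pi range
hours_per_year = 24 * 365

kwh_per_year = nodes * watts_per_node * hours_per_year / 1000
gwh_per_year = kwh_per_year / 1_000_000
print(f"Ouroboros-style network: ~{gwh_per_year:.5f} GWh/year")  # ~0.01577

# Compare against the Bitcoin figure quoted in the text.
bitcoin_gwh = 67_000
ratio = bitcoin_gwh / gwh_per_year
print(f"efficiency ratio: ~{ratio / 1e6:.1f} million times")     # ~4.2
```

The result lands on the same order of magnitude as the 0.01567 GWh and "more than four million times" figures quoted above.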

Now, let’s take a closer look at how the Ouroboros protocol works, and what each new implementation adds.

Ouroboros Classic

We start with Ouroboros: the first implementation of the Ouroboros protocol, published in 2017. This first implementation (referred to as Ouroboros Classic) laid the foundations for the protocol as an energy-efficient rival to proof of work, introduced the mathematical framework to analyze proof of stake, and introduced a novel incentive mechanism to reward participants in a proof-of-stake setting.

More than this, however, what separated Ouroboros from other blockchain protocols – and from other proof-of-stake protocols in particular – was its ability to generate unbiased randomness in the protocol’s leader selection algorithm, and the security assurances that this provided. Randomness prevents the formation of patterns, and is a critical part of maintaining the protocol’s security. Whenever a behavior can be predicted, it can be exploited – so although Ouroboros ensures transparency, its unpredictable leader selection leaves no such opening. Significantly, Ouroboros was the first blockchain protocol to be developed with this type of rigorous security analysis.

How Ouroboros works

A comprehensive explanation of how Ouroboros works can be found in its research paper. To summarize, Ouroboros divides the blockchain into slots and epochs. In Cardano, each slot lasts for 20 seconds and each epoch – which is an aggregation of slots – represents approximately five days’ worth of slots.
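In concrete terms (a Python sketch using the figures above; the exact slot and epoch lengths are protocol parameters, so treat the constants as the values quoted in this article):

```python
SLOT_SECONDS = 20     # each slot lasts 20 seconds
EPOCH_DAYS = 5        # an epoch is approximately five days of slots

slots_per_epoch = EPOCH_DAYS * 24 * 60 * 60 // SLOT_SECONDS
print(slots_per_epoch)   # 21600 slots in a five-day epoch

def slot_of(elapsed_seconds: int) -> tuple:
    """Map seconds since genesis to an (epoch, slot-within-epoch) pair."""
    absolute_slot = elapsed_seconds // SLOT_SECONDS
    return divmod(absolute_slot, slots_per_epoch)

print(slot_of(0))          # (0, 0): the very first slot
print(slot_of(432_000))    # (1, 0): exactly five days in, epoch rolls over
```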

Central to Ouroboros’ design is the recognition that attacks are inevitable. As such, the protocol has tolerance built in to prevent attackers from propagating alternative versions of the blockchain, and assumes that an adversary may send arbitrary messages to any participant at any time. In fact, the protocol is guaranteed to be secure so long as more than half of the stake is controlled by honest participants (that is, those following the protocol).

A slot leader is elected for each slot and is responsible for adding a block to the chain and passing it to the next slot leader. To protect against adversarial attempts to subvert the protocol, each new slot leader is required to consider the last few blocks of the received chain as transient: only the chain that precedes the prespecified number of transient blocks is considered settled. This is also referred to as the settlement delay. Among other things, this means that a stakeholder can go offline and still resynchronize to the blockchain on its return, so long as it is offline for no longer than the settlement delay.
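A minimal sketch of the settlement-delay rule (hypothetical Python; `k`, the number of transient blocks, is a protocol parameter left abstract here):

```python
def settled_prefix(chain, k):
    """Return the settled part of the chain: everything except the last
    k (transient) blocks, which may still be rolled back."""
    return chain[:-k] if k > 0 else chain

chain = ["b0", "b1", "b2", "b3", "b4", "b5"]
print(settled_prefix(chain, 2))   # only b0..b3 are considered settled
```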

Within the Ouroboros protocol, each network node stores a copy of the transaction mempool – where transactions are added if they are consistent with existing transactions – and the blockchain. The locally stored blockchain is replaced when the node becomes aware of an alternative, longer valid chain.
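The local chain-selection rule described above can be sketched in a few lines (hypothetical Python; validity checking is abstracted into a callback, since it involves the full ledger rules):

```python
def prefer(local_chain, candidate, is_valid):
    """A node replaces its local chain only when it learns of an
    alternative chain that is both valid and strictly longer."""
    if is_valid(candidate) and len(candidate) > len(local_chain):
        return candidate
    return local_chain

local = ["b0", "b1", "b2"]
longer = ["b0", "b1", "b2", "b3"]
print(prefer(local, longer, lambda c: True))   # adopts the longer chain
```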

The drawback of Ouroboros Classic was that it was susceptible to adaptive attackers – a significant threat in a real-world setting that was resolved with Ouroboros Praos – and had no secure way for a new participant to bootstrap from the blockchain, which was resolved with Ouroboros Genesis.

Ouroboros BFT

Ouroboros BFT came next. Ouroboros BFT (Byzantine Fault Tolerance) is a simple protocol that was used by Cardano during the Byron reboot, which was the transition of the old Cardano codebase to the new. Ouroboros BFT will help prepare Cardano’s network for Shelley’s release and, with that, its decentralization.

Rather than requiring nodes to be online all of the time, Ouroboros BFT assumes a federated network of servers maintaining the blockchain, with synchronous communication between them, providing ledger consensus in a simpler and more deterministic manner.

Additional benefits include instant proof of settlement, transaction settlement at network speed – which means the determining factor for transaction speed is your network connection to an OBFT node – and instant confirmation in a single round trip of communication. Each of these results in significant performance improvements.

Ouroboros Praos

Ouroboros Praos builds upon – and provides substantial security and scalability improvements to – Ouroboros Classic.

As with Ouroboros Classic, Ouroboros Praos processes transaction blocks by dividing chains into slots, which are aggregated into epochs. Unlike Ouroboros Classic, however, Praos is analyzed in a semi-synchronous setting and is secure against adaptive attackers.

It assumes two possibilities: that adversaries can delay honest participant messages for longer than one slot, and that an adversary may send arbitrary messages to any participant at any time.

Through private-leader selection and forward-secure, key-evolving signatures, Praos ensures that a strong adversary cannot predict the next slot leader and launch a focused attack (such as a DDoS attack) to subvert the protocol. Praos is also able to tolerate adversarially-controlled message delivery delays and a gradual corruption of individual participants in an evolving stakeholder population, which is critical for maintaining network security in a global setting, provided that an honest majority of stake is maintained.

Ouroboros Genesis

Then, we have Ouroboros Genesis. Genesis further improves upon Ouroboros Praos by adding a novel chain selection rule, which enables parties to bootstrap from a genesis block – without, significantly, the need for trusted checkpoints or assumptions about past availability. Genesis also provides proof of the protocol’s Universal Composability, which demonstrates that the protocol can be composed with other protocols in arbitrary configurations in a real-world setting, without losing its security properties. This significantly contributes to its security and sustainability, and that of the networks using it.

Ouroboros Hydra

Last is Ouroboros Hydra. Hydra is an off-chain scalability architecture that addresses three key scalability challenges: high transaction throughput, low latency, and minimal storage per node.

The recently released Hydra whitepaper proposes and outlines the introduction of multi-party state channels, which offer parallel transaction processing – dramatically improving Cardano’s transactions-per-second (TPS) throughput – and instant confirmation of transactions. Reflecting the implementation’s namesake, the paper refers to off-chain ledger siblings – state channels – as heads, which makes the ledger multi-headed.

Ouroboros Hydra enables Cardano to scale horizontally, increasing performance by incorporating additional nodes, rather than vertically, through the addition of more powerful hardware. Early simulations show that each head is able to perform up to 1,000 TPS. With 1,000 heads, this could be as high as 1,000,000 TPS. Once implemented, Ouroboros Hydra will allow Cardano to scale to unrivalled levels – to the level of, for example, global payment systems.

While Hydra is being designed in conjunction with the Ouroboros protocol and the Cardano ledger, it may also be employed over other systems, provided they share the necessary characteristics with Cardano.

The future of Ouroboros

Ouroboros, named after the symbol of infinity, is the backbone of the Cardano ecosystem. The protocol serves as a foundation and staging point for self-propagating systems that cyclically transform and grow, supplanting existing systems – financial and otherwise – and disintermediating the power structures upon which they rely. It is the beginning of a new standard, defined not from the center but, instead, from the margins.

Its future is as its past: a tireless effort to explore, iterate, and optimize, and drive positive change through rigorous research. Each step in its journey – after Hydra come Ouroboros Crypsinous and Ouroboros Chronos – is a new evolution, and takes us closer to our vision of a fairer, more secure, and more sustainable world.

Educating the world on Cardano: initiatives and plans for 2020

Learn more about the education team's plans for the upcoming year

27 February 2020 Niamh Ahern 6 mins read


Education has always been a key part of IOHK’s strategy. Our mission is to grow our global community and business through the medium of education, and to share what we have learned. By claiming leadership in worldwide education on blockchain technology, we have the chance to shape the field for generations and to leave a lasting legacy.

A consistent theme from 2019 has been the demand for a broad range of educational content, as demonstrated by the feedback received about the Incentivized Testnet, as well as the steady flow of support requests to our helpdesk. A key focus in IOHK for 2020 is to develop and expand our education materials as we transition fully into the Shelley era and then to the Goguen era of Cardano.

The IOHK education team will be investing significant time and effort this year in broadening our range of materials. We aim to enhance understanding of our technologies using a variety of learning and training assets targeted at a wide range of stakeholder audiences, both internal and external. This will be vital as the use of IOHK technology moves into the mainstream. We also aim to provide knowledge and information to enterprise decision-makers so they know what business problems our technologies can solve. We have lots planned and many projects are underway as we grow Cardano into a global social and financial operating system.

What can you expect?

We started 2020 with lectures, by Dr Lars Brünjes, our director of education, at the University of Malta. The focus of these lectures was on Plutus and Marlowe, our programming languages for smart contracts. The fruits of these sessions will, in turn, form the foundation of some modular training materials that we plan to formalize and develop over the coming months.

Our free Udemy courses on Plutus and Marlowe by Alejandro Garcia have proven very popular, with over 5,000 students signed up. Feedback has been positive and, as a result of what we learned from our students, we’ve been making incremental improvements over the last year. We now want to take this to the next level and are planning to fully update both courses soon to bring them up to speed with the latest development changes and new features. We are also in the initial planning stages for a second edition of the ebook, Plutus: Writing reliable smart contracts by Lars Brünjes and Polina Vinogradova, which we will be publishing later this year. The writing team has started to identify improvements and we are also gathering feedback directly from readers. If you have suggestions, please raise a pull request in our Plutus ebook GitHub repository with your ideas.

An important step in bridging the gap between our academic papers and mainstream understanding of these concepts is to teach people about Ouroboros, the proof-of-stake protocol that powers Cardano and ada. In response to the valuable feedback we have received from running the Incentivized Testnet, we are planning to create varied educational content to help stake pool operators understand Ouroboros and how the protocol works on a practical level.

Broadening our reach

To broaden the reach of our training courses and content, we are also investigating ways to migrate our popular Haskell training course into a massive open online course, or MOOC, while also making it more comprehensive with the inclusion of Plutus and Marlowe material. In this way, we hope our MOOC will make the course even more valuable, and provide access to the widest possible global community. In addition, we are planning a comprehensive classroom-based Haskell and Plutus course in Mongolia, details of which will be finalized soon. We plan to use the introductory part of the online Haskell course as a primer for this face-to-face training. This is an example of a core efficiency we are embracing: reusing content on Haskell, Plutus, and Marlowe across a variety of stand-alone modular materials that serve both external audiences and internal staff development.

We appreciate the value of interactive and meaningful training workshops, so we intend to host many more this year in several locations around the world. These events are in the initial planning stages and the first in the series will take place in Quebec in the spring. We’ll announce more details through our official channels – Twitter, email, here – nearer the time. The IOHK education team are on hand to support and prepare the necessary learning tools for participants to use at these events.

Alongside these materials and courses, we are mentoring an undergraduate student at the International University of Management (ISM), with her thesis on the topic of the power of blockchain in emerging markets. Additionally, Dr Jamie Gabbay has been invited to contribute to the book 'Applications of new generation technology to cryptocurrencies, banking, and finance’ by Devraj Basu.

Internal initiatives

We are also working with our human resources team to build the IOHK Training Academy: a new learning portal for our internal teams to upskill and develop professionally. This new resource is part of our learning and development strategy that aims to improve employee engagement, satisfaction, and retention. We want to provide access to a library of assets so our staff can easily find exactly what they need. We will be developing tailored ‘learning journeys’ by function, ready-made content that will help people develop skills in new areas, as well as creating specific onboarding journeys for new starters. This is a vital resource for a fast-growing company with staff and contractors spread across 43 countries and will prove to be an important asset for all our people.

2020 is going to be a pivotal year for Cardano and we are looking forward to playing our part. It is our aim to teach both individuals and organizations how to use the protocol, and how it can help with their everyday lives. We have lots to do and we look forward to sharing all the educational content that we produce with our existing community, as well as those of you who are new to Cardano.

Community and stake pool reactions to the Shelley Incentivized Testnet

Two months after launch, we look at the reactions so far

20 February 2020 Anthony Quinn 7 mins read


The Incentivized Testnet (ITN) has been running since mid-December, and the results have produced some fascinating insights into stake pools and a steep learning curve for the blockchain engineers at IOHK, as well as the companies and individuals setting up stake pools, and ada owners. The strategy of using a fast development team writing in the Rust language to act as pathfinders for the heavyweight Haskell developers looks to be paying off. IOHK now has an enormous amount of information about the use — and misuse — of the protocol to take to the next stage: the Haskell testnet. Alongside that, the Cardano community has shown what it is capable of — supporting, experimenting, and providing solid feedback throughout.

Before the ITN went live on December 13, 158 stake pools had registered with the Cardano Foundation and were setting themselves up. Yet, within three days, the number of pools had shot up to 325. By the end of January, the total was well past the 600 mark. There had been some scepticism when IOHK chief Charles Hoskinson talked of 1,000 stake pools last year, but we’re well on the way to that total.

As Scott Darby’s world of stake pools animation shows, the nodes are spread from Brazil to South Africa and Australia; from Japan and China to San Francisco via Europe — and, with nodes in Bodø and Fauske in Norway, we’re even in the Arctic Circle.

Many in the crypto press remarked on the fast results: CryptoSlate pointed out that the testnet had 10 times more pools than Eos or Tron within a week. NewsBTC summed it up with the headline: ‘Cardano testnet success shows how decentralization should work.’ The headlines, of course, don’t tell the whole story, and there were plenty of bumps in the road. But it’s going well, and we’ve received positive feedback about the improvements made to date (with more to come). That said, the network’s success isn’t just about what we do: it’s about the work of stake pool operators. Here, we take a look at the stake pools bringing this decentralized network to life, and explore the business of running a stake pool.

Stake pool tools

Thanks to the efforts of the Cardano community, anyone can delve into the workings of the system and explore what is happening. AdaPools, run by the Cardanians group alongside its pool, has a dashboard based on data from IOHK’s GitHub registry with tools such as a mapping of decentralization, notifications of saturation, and a test for whether a pool is forked and off the main blockchain. Cardano Pool Tool, run by StakeLove, is based around a table that can rank staking providers by 16 measures, from pool name to ada staked to return on investment.

The information shown in these tools comes from the blockchain data. Beyond that, the decentralized nature of the blockchain means we cannot know the identity of stake pool operators until they reveal themselves through their pool’s website or social media. So, the biggest pool early on, with 737m ada staked — twice as much as any of the IOHK pools — had ZZZ as its ticker but, initially, its name was simply its identity extracted from the blockchain. ZZZ soon split itself into several pools and revealed its name as TripleZ, based in Japan.

Some people staking their ada may want to know more about who’s running their pool; others might not. This is one of the things that IOHK — and ada holders, because Cardano is going to become their network once it’s decentralized — will get a feel for from the ITN. There are various forums where all this is being discussed, such as on Telegram and the Cardano forums. It’s been fascinating to see the debate inspired by the testnet, much of which reflects debates within IOHK about how best to build a community-driven, decentralized network and the role that incentives should play. The balance between community contribution and the personal profit motive has been discussed at length. So, too, has the question of how much the community should police itself. This is new territory, and, through the community, we’re able to test our assumptions about how blockchain social dynamics play out, and to what extent the protocol should be responsible for preventing adversarial behavior.

Community and operator reactions

Alongside the technical learnings, gathering community feedback and input has been an essential part of the Incentivized Testnet, to help us on the journey to deploying Shelley on the mainnet. Even before stake pools had set up their nodes to join the testnet, users began to provide feedback and have their say. Max, a Cardano ambassador, ran three ‘What the pool?’ interviews in the run-up to the testnet launch on his Gerolamo blog, and has since added a fourth. The Cardano Effect also interviewed four operators. Another website, Stake Pool Showcase, asks five standard questions and encourages pool owners to sign up and make their case:

  • Who operates the pool?
  • What is your history with the Cardano project?
  • What is the setup of your pool?
  • What are your plans for the future of your pool?
  • Why should people delegate to your pool?

The answers demonstrate a range of operators. In terms of size of stake, the nine listed by February ranged from 1 ada to 50 million ada. Eight of the pools were run by one or two people who worked in computing and most dated their involvement with Cardano back to 2017. Three did not give their names, one stating: ‘The pool is run in an anonymous fashion, in order to make it impossible to influence me. This is part of the security, to make it much harder to attack the pool.’ They were in places such as France, Honolulu, London, Manchester, and Norway.

As well as giving information about their experience, most listed their hardware set-up and seemed to know what to expect from a testnet: ‘Of course, within the testnet the pool can only run as stable as the software stability allows, but I will do my best — and, moving forward, code stability will improve for sure.’ Another said: ‘We have been tinkering with the settings all the time and have achieved very good uptime in the last few epochs — after a lot of lost sleep.’

One operator was sensitive to the power expenditure of running cryptocurrencies: ‘Overall, I am very pleased I still only draw 35-45 watts in day-to-day operations, so it's eco-friendly.’ A second was running a backup server on a Rock Pi single-board computer, which uses as little as 10W, as demonstrated at last year’s IOHK Summit. Looking beyond the testnet, another pool operator raised the challenge of governance in the Voltaire era of development and saw smart contracts as the way forward: ‘We have Marlowe for a financial DSL [domain-specific language], why not a legal DSL to help with governance issues?’

The Cardano Shelley Testnet & StakePool Best Practice Workgroup on Telegram received several mentions as the place for operators to go for tips.

All in all, as Kyle Solomon at AdaFrog told this blog: ‘Being a stake pool operator has been both a highly challenging and amazingly fulfilling journey. The most important takeaways I’ve learned as a pool operator are: first, that the protocol is very close to a production quality that achieves IOHK’s original goals for Cardano; and second, that the Cardano community is utterly and hands-down amazing. Even though we compete amongst each other, every pool operator is eager to help one another.’

The next post in this three-part series will delve deeper into the experiences of the stake pools and what’s been learnt.

As with everything IOHK does, we cannot give advice on how you use your ada and we’re not recommending any of these pools. As always, though, please keep getting in touch and let us know your thoughts.

New Cardano node, explorer backend, and web API released

We’ve refreshed Cardano’s architecture – with more yet to come

12 February 2020 Tim Harrison 4 mins read

New Cardano node, explorer backend, and web API released

Today marks the culmination of considerable effort by the Cardano team: the release of a new Cardano Haskell implementation. This implementation consists of two main components: the Cardano Node and the Cardano Explorer Backend and Web API. Over the past 18 months, we’ve been building a new architectural foundation that will not only prepare us for the upcoming releases for Shelley – and, thereafter, Goguen – but open the door to third-party developers and enterprise adoption.

The changes will begin with the update of the Ouroboros protocol to Ouroboros BFT (Byzantine Fault Tolerance), which is tentatively scheduled for February 20. For now, Cardano’s blockchain production remains on the old implementation. After the update to Ouroboros BFT, we will be able to migrate the core nodes that create blocks, while Daedalus users will be able to upgrade later, once the compatible wallet backend is available.

Why now?

The original implementation of the network node – launched in September 2017 – has taken us as far as it could. We’ve known for a long time that a new architecture is needed to achieve our roadmap, ready the system for Shelley, and provide a foundation for Goguen, as well as other future releases.

This update is about radically improving Cardano’s design, and is the first to take advantage of our work on formal methods. While the old node was monolithic – with components like the wallet backend and explorer built in – the new version is modular. This makes future integrations easier and allows the node to be more readily incorporated into other systems, such as those used by exchanges. In the new architecture, the node, wallet, and explorer exist as separate components (a new wallet backend will soon be released).

What’s involved?

A significant achievement of this new implementation is the separation of the consensus layer from the ledger rules. This decoupling means we are able to change the ledger rules without making changes to (or risking breaking) consensus. As a result, when we transition from Shelley to Goguen, only the ledger rules will change. This will allow us to execute deployments more efficiently and add new features more frequently. We’ll have less to validate and test, while supporting more efficient development.
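To picture what this decoupling buys, here is a minimal Python sketch of the idea. All names here are hypothetical illustrations, not taken from the Cardano codebase: the consensus-side code is written once against an abstract ledger interface, so new ledger rules slot in without touching it.

```python
# Illustrative sketch only: these names are hypothetical and do not
# come from the Cardano codebase.
from dataclasses import dataclass, replace
from typing import Iterable, Protocol


class LedgerRules(Protocol):
    """The abstract interface the consensus layer is written against."""
    def apply_block(self, block: int) -> "LedgerRules": ...


def extend_chain(ledger: LedgerRules, blocks: Iterable[int]) -> LedgerRules:
    """Consensus-side chain extension. It only sees the abstract
    interface, so swapping in new ledger rules never touches this code."""
    for block in blocks:
        ledger = ledger.apply_block(block)
    return ledger


@dataclass(frozen=True)
class ByronLedger:
    """Toy 'Byron era' rules: the state is just a block count."""
    height: int = 0

    def apply_block(self, block: int) -> "ByronLedger":
        return replace(self, height=self.height + 1)


@dataclass(frozen=True)
class ShelleyLedger:
    """Toy 'Shelley era' rules: also tallies a value carried by blocks."""
    height: int = 0
    total: int = 0

    def apply_block(self, block: int) -> "ShelleyLedger":
        return replace(self, height=self.height + 1, total=self.total + block)


# The same consensus code drives either set of rules, unchanged.
print(extend_chain(ByronLedger(), [1, 2, 3]).height)   # 3
print(extend_chain(ShelleyLedger(), [1, 2, 3]).total)  # 6
```

Only the ledger implementations differ between the two ‘eras’; `extend_chain` is deployed once and revalidated only when the interface itself changes.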

Some benefits will be immediate, and others will be realized over time. The direct benefits are that IOHK engineers will be able to innovate more easily and make changes to specific components without necessarily impacting others. The new implementation, coupled with the update to Ouroboros BFT, will also lead to significant TPS (transactions per second) performance improvements. For end-users, the benefits of this update will be cumulative, as the Cardano network profits from greater developmental support and system adaptability and portability.

This new implementation is the result of a lot of hard work. Now, we start to see the benefits of our commitment to formal methods, delivering a network that can not only scale, but remain stable while doing so. The new codebase has had substantial – and ongoing – testing, and we’ve been able to make a number of fundamental improvements without inheriting the shortcomings of the old codebase.

The new Cardano node also features an IPC interface that can be used by multiple client components, including wallets, explorers, CLI tools, and custom integration APIs and tools. This isn’t only about us being able to develop better-performing systems and applications, but others being able to as well.

Cardano Explorer Backend and Web API

The Cardano Explorer Backend and Web API is the new explorer backend for the Cardano Node. It has been completely rewritten compared to the previous cardano-sl explorer, with a new modular design consisting of three components: the Cardano Explorer Node, a PostgreSQL database, and the Cardano Explorer Web API.

  • The cardano-explorer-node is a client of the Cardano node. It synchronizes Byron chain data into the PostgreSQL database. The PostgreSQL database schema is a stable public interface and can be used directly for queries.
  • The cardano-explorer web API is a REST API server that reads data from the PostgreSQL database. It is compatible with the old cardano-sl explorer HTTP API and old web frontend.
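Because the database schema is a stable public interface, client tools can query chain data with plain SQL. Here is a minimal sketch of the idea, using an in-memory SQLite database in place of PostgreSQL and a hypothetical two-table slice of the schema (the real Cardano schema is richer, and its table and column names may differ):

```python
import sqlite3

# Hypothetical slice of an explorer-style schema; the real schema
# and its table/column names may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE block (id INTEGER PRIMARY KEY, hash TEXT, slot INTEGER);
    CREATE TABLE tx (id INTEGER PRIMARY KEY,
                     block_id INTEGER REFERENCES block(id),
                     fee INTEGER);
    INSERT INTO block VALUES (1, 'aa11', 100), (2, 'bb22', 101);
    INSERT INTO tx VALUES (1, 1, 20), (2, 1, 35), (3, 2, 10);
""")

# Direct SQL against the schema: transactions per block plus total fees,
# the kind of aggregate an explorer front end would display.
rows = conn.execute("""
    SELECT b.hash, COUNT(t.id) AS txs, COALESCE(SUM(t.fee), 0) AS fees
    FROM block b LEFT JOIN tx t ON t.block_id = b.id
    GROUP BY b.id ORDER BY b.slot
""").fetchall()
print(rows)  # [('aa11', 2, 55), ('bb22', 1, 10)]
```

The web API covers the common queries, but direct SQL access lets operators and tool builders ask questions the API does not anticipate.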

For more information, see the release notes and documentation linked therein.

This release is about preparing Cardano for what’s to come, and ensuring we have the architecture and network apparatus in place to scale, remain agile, and allow for the necessary interoperability, interactivity, and ease-of-use that industry use-cases require.

For the latest Cardano updates, visit the Cardano forum or follow us on Twitter – and stay tuned for more information on the new wallet backend.
