Preventing Sybil attacks

29 October 2018 Lars Brünjes 8 mins read

Building on last week’s post by Professor Aggelos Kiayias, IOHK’s chief scientist, I want to use this post to discuss another choice we made when designing Cardano’s reward mechanism. The mechanism is designed to give an incentive to stakeholders to ‘do the right thing’ and participate in the protocol in a way that ensures its smooth, efficient and secure operation. As was explained last week, to ensure fairness and decentralization, the rewards mechanism follows three principles:

  • Total rewards for a stake pool should be proportional to the size of the pool until the pool reaches saturation.
  • Rewards inside a pool should be proportional to a pool member’s stake.
  • Pool operators should get higher rewards for their efforts.

One necessary modification deals with pool performance. If a pool operator neglects his ‘duties’ and does not create the blocks he is supposed to create, the pool rewards will decrease accordingly.

Take the example of Alice and Bob who run pools of equal sizes. They are both elected as slot leaders with 100 slots each. Alice dutifully creates all 100 blocks in the 100 slots she leads, whereas Bob misses 20 slots and only creates 80 blocks. In this case, Alice’s pool will get full rewards, whereas Bob’s pool will get less. How much less exactly is controlled by a parameter.
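The post leaves the exact penalty parameterized, but the simplest (linear) case can be sketched as follows. This is an illustrative assumption, not the actual protocol formula, since the precise dependence on performance is controlled by a parameter not given here:

```python
def performance_adjusted_rewards(full_rewards, slots_led, blocks_created):
    # Simplest (linear) case: rewards scale with the fraction of assigned
    # slots actually filled. The exact dependence in Cardano is controlled
    # by a protocol parameter, so treat this as illustrative only.
    performance = blocks_created / slots_led
    return full_rewards * performance

alice = performance_adjusted_rewards(100.0, slots_led=100, blocks_created=100)  # full rewards
bob = performance_adjusted_rewards(100.0, slots_led=100, blocks_created=80)     # reduced rewards
```

In the linear case, Bob's pool would earn 80% of what Alice's pool earns for the same stake.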

The challenge

In this post, I want to concentrate on another potential challenge to the Cardano principles and explain how we decided to overcome it. The challenge was mentioned at the end of last week’s post: How do we prevent one person from creating dozens or even hundreds of small pools?

Note that for very large stakeholders it is perfectly legitimate to split their stake into several pools to get a fair share of the rewards.

An example of a Sybil attack

Let’s assume that we are aiming for 100 pools and therefore cap rewards at 1%. Let us further assume that Alice holds a 3.6% stake. If Alice does not split her stake, she will only get 1% of total rewards. If, however, Alice splits her stake, putting 0.9% into four different pools, her reward from each pool will not be capped.
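The arithmetic behind this example can be checked with a quick sketch, using the 1% cap and 3.6% stake from the example above:

```python
CAP = 0.01  # rewards capped at 1% of the total, aiming for ~100 pools

def reward_share(pool_stake):
    # A pool's share of total rewards is proportional to its stake,
    # capped once the pool reaches saturation.
    return min(pool_stake, CAP)

unsplit = reward_share(0.036)        # a single 3.6% pool is capped at 1%
split = 4 * reward_share(0.036 / 4)  # four pools of 0.9% each: no cap applies
```

Splitting recovers Alice's full 3.6% share, which is why splitting is legitimate for large stakeholders. The problem, as described next, is operators who split while holding very little stake of their own.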

The challenge arises if a small but devious stakeholder is allowed to create a large number of pools (possibly under assumed identities). If he manages to attract people to these pools (for example by lying about his costs and promising high rewards to pool members), he might end up controlling a majority stake with very little personal stake in the system. How could this happen?

Let’s imagine that there are about 100 legitimate, honest pools. If we didn’t guard against it, a malicious player could relatively cheaply create 100, 200 or even 500 pools under false names and claim low operational costs and a low profit margin. Many honest stakeholders would then be tempted to stop delegating to one of the 100 honest pools and instead delegate their stake to one of the malicious pools, which might outnumber the honest pools. As a consequence, the operator of those malicious pools would be selected slot leader for a majority of blocks and so gain control over the blockchain, opening it up to all kinds of mischief and criminal activities, such as double-spending attacks! He would, of course, have to pay for the operation of hundreds of nodes, but that cost pales in comparison with the cost of acquiring a majority stake by buying the majority of all the Ada in existence, which would be in the range of hundreds of millions to billions of dollars.

This would be disastrous because the security of a proof-of-stake system like Cardano relies on the idea that people with a lot of influence over the system should hold a lot of stake and therefore have every reason to help the system run smoothly.

Our solution

This type of attack, where the attacker assumes many identities, is called a Sybil attack, named after the 1973 novel Sybil by Flora Rheta Schreiber about a woman suffering from multiple personality disorder.

How can we prevent Sybil attacks?

One idea might be to make pool registration very expensive. But to prevent attacks, such fees would need to be extremely high and would prevent honest people from creating legitimate pools. Such a hurdle would be bad for decentralization; we want to encourage members of our community to start their own pools and not hinder their entry! There does have to be a modest fee for the simple reason that each registration certificate has to be stored on the blockchain and will consume resources, which have to be paid for.

Our game theoretical analysis led us to a different solution, one that won’t bar ‘small’ stakeholders from starting their own pools by burdening them with prohibitively high fees and a high financial risk.

When registering a pool, the pool operator can decide to ‘pledge’ some of his personal stake to the pool. Pledging more will slightly increase the potential rewards of his pool.

This means that pools whose operators have pledged a lot of stake will be a little bit more attractive. So, if an attacker wants to create dozens of pools, he will have to split his personal stake into many parts, making all of his many pools less attractive, thereby causing people to delegate to pools run by honest stakeholders instead.

In other words, an attacker who creates a large number of pools will have to spread himself too thinly. He can’t make all of his many pools attractive, because he has to split his stake into too many parts. Honest pool operators will bundle all their personal stake into their one pool, thus having a much better chance of attracting members.

The degree of influence a pool operator’s pledged stake has on pool rewards can be fine-tuned by a configurable parameter. Being a bunch of mathematicians with little imagination, we called this parameter ‘a0’. (A colleague suggested the Greek letter phi because it sounds like part of the nasty giant’s chant in Jack and the Beanstalk – ‘Fee-fo-fi-fum’ – and we’re trying to ward off harmful stake pool giants, but we’d be grateful to any member of the community who can come up with a good name!).

Setting a0 to zero would mean: ‘Pool rewards do not depend on the operator’s pledged stake.’ Picking a high value for a0 would result in a strong advantage for pool operators who pledge a lot of stake to their pools.

We have a classical trade-off here, between fairness and an even playing field on the one hand (a0 = 0) and security and Sybil-attack protection on the other hand (a0 is large).

To demonstrate the effect of a0, let’s look at the three graphs in Figure 1.

Figure 1. How a pool operator’s pledged stake affects pool rewards.

In the graphs, we are aiming for ten pools, so rewards will be capped at 10%. The size of the pool stake is plotted on the horizontal axis and the vertical axis shows pool rewards. Each graph depicts three hypothetical pools, where the operators have pledged 0%, 5% and 8% respectively to their pools (the pledged amount is called s in the graphs).

The first graph uses a0 = 0, so the pledged stake has no influence on pool rewards, and the three pools behave in the same way: rewards keep climbing as the pool size grows until they are capped when the pool controls 10% of the stake.

In the second graph, we see the effect of a0 = 0.1. The three pools are still similar, especially for small sizes, but they are capped at slightly different values. Pools with more pledged stake enjoy slightly higher rewards when they grow bigger.

Finally, the third graph shows the effect of a0 = 0.5. It is similar to the second graph, but the differences between the three pools are more pronounced. We still have to choose a “good” value for a0. This choice will depend on quantities such as expected operational pool costs, total rewards and – most importantly – the desired level of security.
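For readers who want to reproduce graphs like these, the IOHK technical report on delegation and incentives gives a maximal-reward formula along the following lines. Treat this as a sketch: the parameter names and the exact expression should be checked against the report.

```python
def pool_reward(sigma, s, a0, k=10, total_rewards=1.0):
    # sigma: the pool's relative stake; s: the operator's pledged relative stake
    # a0: pledge-influence parameter; k: target number of pools
    z0 = 1.0 / k             # saturation point (10% when aiming for 10 pools)
    sigma_ = min(sigma, z0)  # stake beyond saturation earns nothing extra
    s_ = min(s, z0)
    return (total_rewards / (1 + a0)) * (
        sigma_ + s_ * a0 * (sigma_ - s_ * (z0 - sigma_) / z0) / z0
    )

# a0 = 0: pledge is irrelevant, all pools cap at 10% of total rewards
# a0 = 0.5: a saturated pool pledging 8% out-earns one pledging nothing
```

With a0 = 0 the three pools in the graphs coincide; with a0 = 0.5 the pool whose operator pledged 8% is capped at a visibly higher reward than the one with no pledge, as in the third graph.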

We will want to keep a0 as small as possible, while still guaranteeing high levels of security against Sybil attacks.

In any case, it is important to keep in mind that the introduction of a0 does not prevent ‘small’ stakeholders from running successful pools because somebody with a great idea can always reach out to the community, convince others and invite them to work together and pool resources to pledge to the pool. In the end, running a solid, reliable pool and working closely with the community will be more important than just owning a lot of stake.

We have also started thinking about replacing the dependency of rewards on the pool operator’s pledged stake with a reputation system. This would allow people with little stake to make their pools more attractive by running their pools reliably and efficiently over a long period of time. This won’t be implemented in the first iteration, but is on the table for future versions of Cardano.

You might also like to read the IOHK technical report ‘Design Specification for Delegation and Incentives in Cardano’ for a broader, more detailed description of the system.

On Monday, 5 November, IOHK will hold an AMA (Ask Me Anything) on staking in Cardano, where anyone will have the opportunity to put questions to the IOHK team. Details of the AMA will be announced soon.

Artwork, Creative Commons Mike Beeple

Stake pools in Cardano

IOHK’s chief scientist introduces staking

23 October 2018 Prof Aggelos Kiayias 17 mins read

In a proof of stake (PoS) blockchain protocol, the ledger is maintained by the stakeholders that hold assets in that ledger. This allows PoS blockchains to use less energy compared with proof of work (PoW) or other types of blockchain protocols. Nevertheless, this requirement imposes a burden on stakeholders. It requires a good number of them to be online and maintain sufficiently good network connectivity that they can collect transactions and have their PoS blocks reach the others without substantial network delays. It follows that any PoS ledger would benefit from reliable server nodes that hold stake and focus on maintenance.

The argument for stake pools

Wealth is typically distributed according to a power law such as the Pareto distribution, so running reliable nodes executing the PoS protocol may be an option only for a small, wealthy subset of stakeholders, leaving most without the ability to run such services. This is undesirable; it would be better if everyone had the ability to contribute to ledger maintenance. One approach to rectifying this problem is to allow the creation of stake pools. Specifically, this refers to the ability of stakeholders to combine their stake and form a single entity, the stake pool, which can engage in the PoS protocol using the total stake of its members. A pool will have a manager who will be responsible for running the service that processes transactions. At the same time, the pool manager should not be able to spend the stake that their pool represents, while members who are represented by the pool should be free to change their mind and reallocate their stake to another pool if they wish. Finally, and most importantly, any stakeholder should be able to aspire to become a stake pool manager.

Participating in PoS ledger maintenance incurs costs. Certainly not as high as in the case of a PoW protocol but, nevertheless, still significant. As a result, it is sensible that the community of all stakeholders incentivizes in some way those who support the ledger by setting up servers and processing transactions. This can be achieved by a combination of contributions from those that use the ledger (in the form of transaction fees) and inflation of the circulating supply of coins (by introducing new coins in circulation to be claimed by those engaged in the protocol).

In the case of Bitcoin, we have both the above mechanisms, incentivization and pools. On the one hand, mining is rewarded by transaction fees as well as a block reward that is fixed and diminishes over time following a geometric series. On the other hand, pools can be facilitated by dividing the work required for producing blocks among many participants and using ‘partial’ PoWs (which are PoWs that are of smaller difficulty than the one indicated by the current state of the ledger) as evidence of pool participation.

It is straightforward to apply a similar type of incentivization mechanism in the PoS setting. However, one should ask first whether a Bitcoin-like mechanism (or any mechanism for that matter) would converge to a desirable system configuration. Which brings us to the important question: what are the desirable system configurations? If the only consideration is to minimize transaction processing costs, in a failure-free environment, the economically optimal configuration is a dictatorial one. One of the parties maintains the ledger as a service while all the others participate in the pool created by this party. This is clearly an undesirable outcome because the single pool leader becomes also a single point of failure in the system, which is exactly the type of outcome that a distributed ledger is supposed to avoid. It follows that the coexistence of many pools, in other words decentralization, should be a desirable characteristic of the ledger incentivization mechanism.

Reward-sharing schemes for PoS

So what would a reward-sharing scheme look like in a PoS setting? Rewards should be provided at regular intervals and pool maintenance costs should be retained by the pool manager before distributing the remaining rewards among the members. Given that it is possible to keep track of pool membership in the ledger itself using the staking keys of the participants, reward splits within each pool can be encoded in a smart contract and become part of the ledger maintenance service. First things first, pool managers should be rewarded for their entrepreneurship. A pool creation certificate posted on the ledger will declare a profit margin to be shaved off the pool’s rewards after subtracting operational costs, which should also be declared as part of the pool creation certificate. The cost declaration should be updated frequently to absorb any volatility that the native token of the system has with respect to the currency that denominates the actual costs of the pool manager. At the same time, the pool creation certificate, backed up by one or more staking keys provided by stakeholders, can declare a certain amount of stake that “stands behind” the pool and can be used either as an indication that the pool represents the genuine enterprise of one or more stakeholders or as collateral guaranteeing compliance with correct protocol behavior.

Given the above setup, how do Bitcoin-like mechanisms fare with respect to the decentralization objective? In Bitcoin, assuming everyone follows the protocol, pool rewards are split in proportion to the size of each pool. For example, a mining pool with 20% of the total hashing power is expected to reap 20% of the rewards. This is because rewards are proportional to the number of blocks obtained by the pool and the number of blocks is in turn proportional to the pool’s mining power. Does this lead to a decentralized system? Empirical evidence seems to suggest otherwise: in Bitcoin, mining pools came close to (and occasionally even exceeded) the 50% threshold that is the upper boundary for ensuring the resilience of the ledger. A simple argument can validate this empirical observation in the framework of our reward-sharing schemes: if pools are rewarded proportionally to their size and pool members proportionally to their stake in the pool, the rational thing to do would be to centralize to one pool. To see this, consider the following. At first, it is reasonable to expect that all players who are sufficiently wealthy to afford creating a pool will do so by setting up or renting server equipment and promoting it with the objective of attracting members so that their share of rewards grows. The other stakeholders that are not pool managers will join the pool that maximizes their payoff, which will be the one with the lowest cost and profit margin. Pool competition for gaining these members will compress profit margins to very small values. But even with zero profit margin, all other pools will lose to the pool with the lowest cost. Assuming that there are no ties, this single pool will attract all stakeholders. Finally, other pool managers will realize that they will be better off joining that pool as opposed to maintaining their own because they will receive more for the stake they possess. Eventually, the system will converge to a dictatorial single pool.
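The compression argument can be made concrete with a small calculation. The figures below are illustrative, and the scheme is the proportional, uncapped one described above:

```python
def member_payoff(pool_stake, member_stake, cost, margin, total_rewards=1000.0):
    # Proportional (Bitcoin-like) scheme: pool reward grows with size, no cap.
    pool_reward = total_rewards * pool_stake
    distributable = (pool_reward - cost) * (1 - margin)
    return distributable * member_stake / pool_stake

# Illustrative pools: (relative stake, operational cost, profit margin)
pools = [(0.40, 25, 0.0), (0.30, 25, 0.0), (0.30, 30, 0.0)]
payoffs = [member_payoff(p, 0.01, c, m) for p, c, m in pools]
# Even with margins compressed to zero, a member's payoff is highest in the
# largest pool with the lowest cost, so rational members keep migrating to it.
```

Because fixed costs are spread over a larger reward, the biggest and cheapest pool always pays its members the most; there is no point at which a smaller pool becomes more attractive, which is exactly the centralizing dynamic described above.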

Figure 1 shows a graphical representation of this. It comes from one of the numerous simulations our team has conducted in the process of distilling effective reward sharing schemes. In the experiment, a number of stakeholders follow a reactive process where they attempt to maximize their payoff based on the current system configuration. The experiment leads to a centralized single pool, validating our theoretical observations above for Bitcoin-like schemes. From a decentralization perspective, this is a tragedy of the commons: even though the participants value decentralization as an abstract concept, none of them individually wants to bear the burden of it.

Figure 1. Centralisation exhibited by a Bitcoin-like reward-sharing scheme in a simulation with 100 stakeholders. Initially, a high number of pools are created by the stakeholders. Taking turns, stakeholders try to maximize their payoff and change strategy, leading to a convergence point where only a single pool exists.

A better reward sharing scheme

Clearly we have to do better than a dictatorship! A first observation is that if we are to achieve decentralization, linearity between rewards and size should taper off after a certain level. This is because, while linearity is attractive when the pool is small and wants to attract stakeholders, after a certain level it should be diminished if we want to give an opportunity for smaller pools to be more competitive. Thus, we will divide the behavior of the reward-sharing scheme depending on the size of the pool into two stages: a growth stage, when linearity is to be respected, and a stabilization stage when the pool is large enough. The point where the transition happens will be called the saturation point and the pool that has passed this point will be saturated. We can fix rewards to be constant after the saturation point, so that if the saturation point is 1%, two pools, with total stakes of 1% and 1.5%, will receive the same rewards.

To appreciate how the dynamics work from the perspective of a single stakeholder, consider the following example. Suppose there are two pools, A and B managed by Alice and Bob, with operational costs of 25 and 30 coins respectively, each one with a profit margin of 4%. Suppose further that the total rewards to be distributed are 1,000 coins and the saturation point of the reward-sharing mechanism is 20%. At a given point in time, Alice’s pool has 20% of the stake, so it is at the saturation point, while Bob’s pool is at 19%. A prospective pool member, Charlie, holds 1% of the stake and considers which pool to join. Joining Alice’s pool will bring its total stake to 21%, and because it has exceeded the saturation point the reward will be 200 coins (20% of the total rewards). Deducting operational costs will leave 175 coins to be distributed between Alice and the pool members. After removing Alice’s profit margin and considering Charlie’s relative stake in the pool, he will receive 8 coins as a reward. If Charlie joins Bob’s pool, the total rewards will be 200 coins, or 170 coins after removing the operational costs. However, given that Charlie’s stake is 5% (1/20) of the pool, it turns out that he will receive 2% more coins than if he had joined Alice’s pool. So Charlie will join Bob’s pool if he wants to maximize his rewards.

Now, let us see what happens in the case that Charlie is facing the same decision at a hypothetical earlier stage of the whole process when Alice’s pool was already at 20% of the total stake, while Bob’s pool was only at 3%. In this case, Bob has a very small pool and the total rewards available for its members are much less compared with the previous case. As a result, if Charlie did the same calculation for Bob’s pool, his 1% stake would result in a 4% total stake for the pool but, if one does the calculations, he would receive a mere 30% of the rewards that he would have obtained had he joined Alice’s pool. In such a case, the rational decision is to join Alice’s pool despite the fact that his membership will make Alice’s pool exceed the saturation point. Refer to Table 1 below for the exact figures.
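Both scenarios can be verified numerically. The sketch below assumes rewards are distributed exactly as described: capped at the saturation point, then costs deducted, then the operator's margin, then a pro-rata split among members:

```python
def charlie_reward(pool_stake, cost, margin=0.04, member_stake=0.01,
                   total_rewards=1000.0, saturation=0.20):
    # Pool reward is proportional to pool stake, capped at the saturation point.
    pool_reward = total_rewards * min(pool_stake, saturation)
    distributable = (pool_reward - cost) * (1 - margin)
    return distributable * member_stake / pool_stake

# Scenario 1: Alice at 20% (joining pushes her past saturation), Bob at 19%.
alice = charlie_reward(0.21, cost=25)      # 8.0 coins
bob = charlie_reward(0.20, cost=30)        # 8.16 coins, i.e. 2% more
# Scenario 2: Bob's pool is only at 3%, so Charlie joining gives it 4% total.
bob_small = charlie_reward(0.04, cost=30)  # 2.4 coins, 30% of the 8.0
```

The numbers confirm both claims in the text: with Bob near saturation, Charlie earns 2% more by joining him; with Bob's pool still tiny, Charlie's myopic payoff there is only 30% of what Alice's pool offers.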

Table 1. Charlie who holds 1% of the total stake, is considering joining pools run by Alice, Bob, Brenda and Ben. His reward is calculated in coins for joining each one. The total reward pool is 1,000 and the saturation point is 20%.

Being far-sighted matters

The above appears to be contradictory. To understand what Charlie needs to do we have to appreciate the following fact. The choice of Charlie to join Alice’s pool in the second scenario is only rational in a very near-sighted (aka myopic) sense. In fact, Charlie is better off with Bob’s pool, as is demonstrated by the first scenario, as long as Bob’s pool reaches the saturation point. Thus, if Charlie believes that Bob’s pool will reach the saturation point, the rational choice should be to support it. Other stakeholders will do the same and thus Bob’s pool will rapidly reach the saturation point making everyone that participated in it better off, while also supporting the ideal of decentralization: Alice’s pool instead of constantly growing larger will stop at the saturation point and other pools will be given the ability to grow to the same size. This type of strategic thinking on behalf of the stakeholders is more far-sighted (aka non-myopic) and, as we will see, has the ability to help parties converge to desirable decentralized configurations for the system.

It is worth noting that it is unavoidable that the system in its evolution will reach pivotal moments where it will be crucial for stakeholders to exercise far-sighted thinking, as in the scenario above where Alice’s pool reaches the saturation point while other pools are still quite small. The reason is that due to the particular circumstances of each stake pool manager, the operational costs will be variable across the stakeholder population. As a result, it is to be expected that starting from a point zero where no stake pools exist, the pool with the lowest operational cost will also be the first to grow. This is natural since low operational costs leave a higher level of rewards to be split among the pool members. It is to be expected that the system will reach moments like the second scenario above where the most competitive pool (Alice’s, with operational cost 25) has reached saturation point while the second-most competitive (Bob’s, with operational cost 30) is still at a small membership level.

One might be tempted to consider long-term thinking in the setting of Bitcoin-like reward-sharing schemes and believe that it can also help the system converge to decentralization. Unfortunately, this is not the case. In a Bitcoin-like scheme, contrary to our reward-sharing scheme with a saturation point, there is no point in the development of Alice’s and Bob’s pools when Bob’s pool will become more attractive in Charlie’s view. Indeed, without a saturation point, Alice’s bigger pool will always offer more rewards to Charlie: this stems from the fact that the operational costs of Alice are smaller and hence leave more rewards for all the stakeholders. This will leave Bob’s pool without any members, and eventually, as discussed above, it will be the rational choice for Bob also to dissolve his pool and join Alice’s, making Alice the system’s dictator.

Going back to our reward-sharing scheme, we have established that non-myopic strategic thinking promotes decentralization; nevertheless, there is an important point still open. At a pivotal moment, when the non-myopic stakeholder Charlie rationally decides to forgo the option to join Alice’s saturated pool, he may have a number of aspiring pools to choose from. For instance, together with Bob’s pool that has operational costs of 30 and profit margin 4%, there could be a pool by Brenda with operational cost of 33 and profit margin 2%, and a pool by Ben with operational cost of 36 and profit margin 1%. The rational choice would be to go with the one that will reach the saturation point; is there a way to tell which one would be the best choice? In our full analysis paper we provide an explicit mechanism that orders the pools according to their desirability and, using the information recorded in the ledger about each stake pool, it can assist stakeholders in making the best possible choice at any given moment. In our example, it is Brenda’s pool that Charlie should join if he wants to maximize his rewards (see Table 1). To aid Cardano users, the pool-sorting mechanism will be built into Daedalus (and other Cardano-compatible wallets) and will provide a visual representation of the best choices available to stakeholders using the information in the ledger regarding pool registrations.
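The actual desirability ordering in the full analysis paper is more involved, but a rough proxy (an assumption for illustration, not the paper's mechanism) already reproduces the ranking in this example: rank each pool by the member reward it would offer once it reaches saturation.

```python
def reward_at_saturation(cost, margin, member_stake=0.01,
                         total_rewards=1000.0, saturation=0.20):
    # Member reward assuming the pool eventually grows exactly to saturation.
    distributable = (total_rewards * saturation - cost) * (1 - margin)
    return distributable * member_stake / saturation

candidates = {
    'Bob':    reward_at_saturation(cost=30, margin=0.04),
    'Brenda': reward_at_saturation(cost=33, margin=0.02),
    'Ben':    reward_at_saturation(cost=36, margin=0.01),
}
best = max(candidates, key=candidates.get)  # 'Brenda'
```

Brenda’s lower margin more than compensates for her slightly higher cost, matching the outcome reported in Table 1.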

Experimental evaluation

So how does our reward scheme fare with respect to decentralization? In the full analysis paper we prove that there is a class of decentralized system configurations that are “non-myopic Nash equilibria.” An equilibrium strategy here means that stakeholders have a specific way to create pools, set their profit margins and/or delegate to other pools, so that no stakeholder, thinking for the long term, is better off following a different strategy. Moreover, we demonstrate experimentally that reactive play between stakeholders with non-myopic thinking converges to this equilibrium in a small number of iterations, as shown in Figure 2.

Figure 2. Decentralization as exhibited by our reward-sharing scheme in a simulation with 100 stakeholders and 10% saturation point. Pools are gradually created by the stakeholders. Taking turns, the stakeholders attempt to maximise their payoff non-myopically leading to a final convergence point where 10 pools exist, each with an equal share of the total stake. At the final point, no rational stakeholder wishes to change the state of the system.

A characteristic of our approach is that the number of pools is only part of the description of the reward-sharing scheme and thus is in no way enforced by the system on the stakeholders. This means stakeholders are free to experiment with pool creation and delegation of stake without having to conform to any predetermined system architecture. This is in contrast to other approaches taken in PoS systems such as EOS where the number of participants is a hardcoded parameter of the consensus system (specifically, 21 pools). At the same time, our approach allows the whole stakeholder set to express its will, by freely joining and leaving pools, receiving guaranteed rewards for their participation while witnessing how their actions have a quantifiable impact on the management of the PoS distributed ledger no matter the size of their stake. This is in contrast to other approaches taken in PoS systems such as Ethereum 2.0 where ledger maintenance is performed by registered validators on the basis of a collateral deposit without a built-in process of vetting by the stakeholder set.

So what would be a sensible choice for the number of pools that should be favored by the reward scheme for Cardano? Given that decentralization is our main objective, it is sensible to set this parameter to be as high as possible. Our network experiments showed that the system can still operate effectively with as many as 1,000 running pools. Choosing a saturation threshold for our reward-sharing scheme based on this number will make running a stake pool profitable even if the total stake delegated to it is as little as 0.1% of the total circulation of Ada.

Looking ahead – Sybil attacks

Given that decentralization can be achieved by a large number of independent stake pools, it is also important to see whether some decentralized system configurations are more preferable than others. As described so far in this post, our reward-sharing scheme will lead rational stakeholders towards promoting the stake pools that will incur the smallest total cost. Even though this maximizes rewards and minimizes costs, it may not necessarily be the most desirable outcome. The reason is that at the equilibrium point one may see a set of stakeholders promoted as stake pool managers who collectively possess a very small stake themselves. This imbalance, in which operators holding a small amount of stake represent the total stake of the system, can be detrimental in many ways: stake pool managers may be prone to corruption or bribery, or, perhaps even worse, a large stakeholder may register many stake pools in the hope of controlling the whole ecosystem, in this way performing a Sybil attack that would hurt decentralization. For this reason, the reward-sharing scheme as presented in our full analysis paper is suitably modified to be sensitive to the stake backing the pool so that this type of behaviour is mitigated. We will delve deeper into this aspect of Cardano reward-sharing in the next blog post.

Artwork, Creative Commons Mike Beeple

Ouroboros Genesis presented at flagship conference

IOHK research on proof of stake appears at CCS in Toronto

18 October 2018 Jane Wild 4 mins read

A third major paper from the Ouroboros line of research was presented at a leading computer security and cryptography event yesterday, a recognition of the contribution the work makes to the field of computer science. The paper, Ouroboros Genesis, was presented by researcher Christian Badertscher at the 25th ACM Conference on Computer and Communications Security, held in Toronto this week. The conference is five days of presentations, workshops and tutorials for hundreds of delegates, who are information security researchers, industry professionals, and developers from around the world. The annual event, organised by the Special Interest Group on Security, Audit and Control (SIGSAC) of the Association for Computing Machinery (ACM), is a forum for delegates to come together for discussions and to explore cutting edge research.

This year CCS sponsors included the US government agency, the National Science Foundation, and major global technology companies such as Baidu, Cisco, Samsung, Google and Facebook. The hardware wallet maker Ledger was also present. CCS is the highest rated computer security and cryptography conference according to Google Scholar ratings, meaning that collectively, the papers selected to appear at the conference are more cited by academics than papers for any other conference.

IOHK’s paper appeared in one of the two sessions that were dedicated to blockchain, with a total of six papers on the subject overall. These included a paper on what will happen with blockchains such as Bitcoin as rewards get smaller and the potential problems that stem from that. Scalability was in focus too, with a paper on scaling blockchains through sharding and another on state channel networks.

Ouroboros Genesis Presentation
Since Bitcoin demonstrated the disadvantages of using an energy-intensive proof-of-work protocol to run a public distributed ledger, many researchers have turned to explore proof of stake. The Ouroboros research is an attempt to systematically surmount the challenges that proof of stake poses and describe a secure, efficient and sustainable protocol for blockchains. This effort has been led by Professor Aggelos Kiayias, IOHK Chief Scientist and Chair in Cybersecurity and Privacy at the University of Edinburgh’s School of Informatics. In the space of only a couple of years, the team have made significant advances. Ouroboros was the first peer reviewed, provably secure proof-of-stake protocol, and it is already running in the real world, underpinning Cardano, a top 10 global cryptocurrency. [Ouroboros Genesis](https://www.youtube.com/watch?v=LCeK_4o-NCc "Ouroboros Genesis: A Provably Secure Proof-of-Stake Blockchain Protocol, youtube.com") is the third paper in the Ouroboros family of proof-of-stake protocols, and the third paper from this important line of IOHK research to be heard at a flagship international computer science conference. The first paper, Ouroboros, was presented at [Crypto 2017](/blog/proof-of-stake-protocol-ouroboros-at-crypto-17/ "Proof-of-stake protocol, Ouroboros, at Crypto 17, iohk.io") in California, and the second, Ouroboros Praos, was at [Eurocrypt 2018](/blog/ouroboros-praos-presented-at-leading-cryptography-conference/ "Ouroboros Praos presented at leading cryptography conference, Eurocrypt, iohk.io") in Tel Aviv. Further papers are to come from the research team, including on sharding, a means to provide scalability for Cardano.

Using Ouroboros Genesis, new users joining the blockchain will be able to do so securely based only on an authentic copy of the genesis block, without the need to rely on a checkpoint provided by a trusted party. Though common in proof-of-work protocols like Bitcoin, this feature was previously unavailable in existing proof-of-stake systems. This means that Ouroboros can now match the security guarantees of proof-of-work protocols like Bitcoin in a way that was previously widely believed to be impossible.

Christian Badertscher (left) with Charles Hoskinson (right)

Aggelos said: “Ouroboros Genesis resolves an important open question in the PoS blockchain space, namely how it is possible to securely connect to the system without any information beyond the genesis block. This is a significant step forward that enables a higher degree of decentralization that seemed unattainable for PoS protocols before our work.

“Our security analysis is also in the ‘universal composition’ setting that provides, for the first time in the PoS space, a modular way of building secure applications on top of the ledger.”

Christian said: “It is exciting to present Ouroboros Genesis at a top security conference and very rewarding to see how theoretical research can make a significant impact on practice. Avoiding the need of a trusted checkpoint, and still being secure in a setting with a variable level of participation, has been a challenging problem to solve in the PoS space.”

Published on May 3 this year, the paper’s full title is Ouroboros Genesis: Composable Proof-of-Stake Blockchains with Dynamic Availability. The research team was comprised of Christian Badertscher, Peter Gaži, Aggelos Kiayias, Alexander Russell, and Vassilis Zikas.

An Open Letter to the Cardano Community from IOHK and Emurgo

A joint statement from Charles Hoskinson and Ken Kodama

12 October 2018 IOHK and Emurgo 14 mins read

To the Cardano Community, Cardano is an amazingly diverse and vibrant project that is rightfully being recognised throughout the world. Our community contains tens of thousands of engaged and passionate volunteers, advocates, contributors and fans in countries ranging from Argentina to Zimbabwe. This growth is due to our commitment to innovation, transparency, balance of power and embracing the scientific community. To IOHK and Emurgo, Cardano is so much more than a product we work on. Cardano is a mission to deliver a financial operating system to the three billion people who do not have one.

As with all movements, occasionally issues occur that require careful and rational discussion. When the Cardano movement began in 2015, instead of launching an all-powerful foundation that would raise funds, manage development, encourage adoption and address the concerns of the community, we diligently split the governance of Cardano into three legal entities: IOHK, Emurgo and the Cardano Foundation. This separation of powers was to ensure that the failure of one legal entity, if any, could not jeopardise or destroy the Cardano project.

IOHK and Emurgo

IOHK’s primary responsibility was, and continues to be, developing the core collection of protocols that compose Cardano, from academic inception to applying formal methods to verify correct implementation. This task is enormous in scope and has led to the creation of three research centers, many peer reviewed papers, engagement with half a dozen development firms and one of the most active cryptocurrency GitHub repositories.

As a company that accepts its critical role in this effort, IOHK has attempted to be as transparent and focused as possible. That acceptance is why we launched the Cardano Roadmap website, produce many videos on our IOHK YouTube channel, publish a weekly technical report, have dedicated project managers who produce videos on progress, hold special events and have AMA (Ask Me Anything) sessions.

Emurgo has been responsible for building partnerships with developers and instigating projects for the Cardano protocol around the world. Emurgo has grown from a small entity of just a few employees to a multinational effort with an ever-increasing investment portfolio.

Emurgo has been collaborating with IOHK on products such as the Yoroi wallet, improving the developer experience for smart contracts and DApps, and holding discussions on high-value markets to drive adoption, as well as other efforts within its mandate. These collaborations will continue to grow and become even more meaningful as we move into 2019, with Cardano achieving decentralization, multi-asset accounting and full smart contract support.

This acceptance of its role is also why IOHK has retained firms such as Quviq, Tweag and Runtime Verification to help build Cardano, refine processes and speed up development. Our collective development efforts have resulted in three codebases (Scala, Haskell and Rust), some of the first examples of applied formal methods with our new wallet backend and incredibly sophisticated techniques for modeling performance and reliability with deployed distributed systems.

Finally, our protocols are based on scientific inquiry. Such work should be done by scientists who have the requisite domain experience and wisdom. Thus we have directly engaged leaders in their respective fields with years to decades of experience to write our foundational papers. We have also vetted these papers through the peer review process accepted by the computer science community.

Like every other project, IOHK’s efforts aren’t without their flaws and setbacks. The initial release of Cardano wasn’t perfect. There were many issues ranging from some users having difficulty connecting to peers, to exchanges having trouble with the Cardano wallet. Such teething problems are to be expected with any new codebase. However, the most important observation is that IOHK has never accepted any status quo and continues to work diligently to improve the code, the user’s experience and broaden the utility of Cardano.

Like IOHK, Emurgo has had its own challenges. Navigating 2017 – during a period of utterly irrational valuations, ICO mania, many poorly led ventures as well as continued regulatory uncertainty – was difficult. As with all ventures, staffing a great executive team is also a tremendous task. But as 2018 comes to a close, Emurgo has retained some great talent such as their CTO Nicolas Arqueros, chief investment officer Manmeet Singh, and one of our community’s best educators, Sebastien Guillemot. Emurgo’s collaborations with IOHK have been both meaningful and productive.

Cardano Foundation

The Cardano Foundation was created to promote the Cardano protocol, to grow and inform our community and address the needs of the community. These are broad aims that cross demographics and borders.

Being more specific about the needs of the Cardano community, all cryptocurrency communities need accurate, timely and comprehensive information about events, technology and progress of the ecosystem. All cryptocurrency communities need stable and moderated forums to discuss their ideas, concerns and projects. All cryptocurrency communities need liquidity and thus require access to exchanges.

The Cardano protocol also requires community-led efforts to gradually decentralize the protocol beyond what Bitcoin and Ethereum have achieved. A core focus outlined in the Why Cardano white paper is the desire to establish a treasury and a blockchain-based voting system for ratifying Cardano improvement proposals.

This effort cannot just rely on technological and scientific innovation. Rather, it requires a well-organized and informed community that is representative of the users of Cardano and is geographically diverse. Among other things, it is the Foundation’s responsibility to invest in the creation of this community.

Lack of performance by the Cardano Foundation

For more than two years there has been great frustration in the Cardano community and ecosystem. This has been caused by a lack of activity and progress on the assigned responsibilities of the Cardano Foundation and its council. Furthermore, there has been no clear indication of improvement, despite many fruitless attempts to persuade the Foundation’s chairman and council to change this.

Dissatisfaction and frustrations about the Foundation’s performance stem primarily from:

  1. A lack of strategic vision from the council. There are no KPIs or public strategy documents outlining how the Foundation will accomplish the above goals or any discernible goal.

  2. The absence of a clear public plan for how the Foundation will spend its funds to benefit the community.

  3. The lack of transparency in the Foundation’s operations (for example publication of its board minutes and director remuneration).

  4. Material misrepresentations and wrongful statements by the Foundation’s council including a claim that it owned the trademark in Cardano. The council has even tried to assume the power to decide who speaks for the protocol, what should be deployed on the protocol and how the press should represent relationships between Emurgo, IOHK, the Foundation and third party projects.

    Having identified the legal dubiousness and profound consequences of the Foundation’s claims in respect of trademark ownership, IOHK ceased collaboration with the Foundation until it published a fair use policy for the trademark. This process took weeks.

    The unpredictable conduct and lack of action by the board of the Foundation has been puzzling. For example, when IOHK went to Ethiopia to sign an MOU with the Ministry of Science and Technology, the Foundation originally agreed to attend and jointly sign. Unexpectedly, the Foundation decided to back out the week before and claimed in an email to IOHK’s communications director that it – without any basis or underlying agreement – was to be the single guardian of the Cardano brand and protocols.

    Read the email from the Foundation to IOHK here.

  5. Lack of financial transparency. As of October, despite several requests the Foundation has still refused to publish the addresses holding its allocation of Ada. Neither has the Foundation published audited financial statements. And, the Foundation has not provided any information on remuneration of directors and officers.

  6. The lack of a complete and diverse Foundation council. At its incorporation (September 2016) the council consisted of 4 members, with Michael Parsons as chairman. Ten days after his appointment, a council member (Mr Parsons’ stepson, Bruce Milligan) resigned. Instead, Mr Milligan became the general manager of the Foundation. His vacancy on the council, however, was never filled. Ten months after the Foundation’s incorporation, the third council member resigned, thus reducing the council from the 4 members as intended by its founders to only 2 (Mr Parsons and a professional Swiss council representative).

    The vacancies have not been filled by the remaining council members. As a consequence, since 14 July, 2017, the Foundation has, in effect, been controlled by Mr Parsons. He has been acting as the Foundation’s de facto sole decision-maker in respect of the day-to-day business of the Foundation and ruling its staff like a monarch. For more than 15 months, there appear to have been no reasonable attempts to fill the 2 council vacancies. There appears to be no oversight and there appear to be no checks and balances beyond those required by Swiss law.

A sound council board, in the opinion of the ecosystem, should consist of several active, competent and independent members. These should be domain experts from the cryptocurrency community who fairly represent the holders of Ada and users of the Cardano protocol. They should be committed to maintaining reasonable checks and balances. Although not imposed under Swiss law, the council appointment process should ideally be open to the community and include their feedback and suggestions.

    Despite over 90 percent of the original Ada voucher purchasers residing in Japan, the Cardano Foundation has yet to appoint anyone from Japan into a position of power. Also, the Foundation has yet to engage a lobbyist to assist with getting Ada listed on Japanese exchanges. And, the Foundation has no significant presence or personnel from Japan or even Asia.

  7. Lack of any concept of how the millions of dollars committed to the Foundation will benefit the Cardano community. Instead of working on meaningful projects such as law and policy research for ICO and STO standards for assets that will be issued on Cardano, thereby offering an alternative to Ethereum’s tokens, or studying ways to deploy Cardano’s improvement proposal process, the Foundation’s council has decided to invest its provided research capital in the Distributed Futures program.

    No explicit case has been made as to how the Distributed Futures research will benefit the Cardano protocol or the ecosystem. No funds have been committed to commercialize the research. No apparent effort has been made by council members of the Foundation to annotate the Distributed Futures reports with specifics on how the findings will be applied to our community.

    Furthermore, members of the ecosystem worry about potential conflicts of interest because both Robert McDowall, an adviser and contributor to Distributed Futures research, and Michael Mainelli, leader of Distributed Futures, have pre-existing relationships with Mr Parsons. Indeed, we are not aware of any process within the Cardano Foundation to analyze potential conflicts of interest and require recusal where necessary.

  8. Absence/unawareness of any meaningful internal governance system at the Cardano Foundation. In our many interactions with Foundation staff, it has never become clear how decisions are made and reviewed. It has also never been clear how the chain of command operates beyond Chairman Parsons.

Our call for action

Emurgo and IOHK are calling for the Foundation council: to voluntarily subject itself to the Swiss authorities; for a complete audit of all of the Foundation's financial transactions and major decisions to be conducted; and for the results to be released to the general public. This audit should include direct and indirect remuneration paid (in the light of actual and agreed performance or services delivered for the benefit of the Foundation) to Mr Parsons; his stepson Bruce Milligan who acted as a general manager; and his wife, Julie Milligan, who acted as an assistant to Mr Parsons.

The Cardano Foundation is an independent legal entity governed by its council, thus the Cardano community, IOHK and Emurgo cannot force the chairman to resign. Nevertheless, we can only hope that reason will persuade Mr Parsons to voluntarily step down. This would allow for regulatory oversight and avoid the Foundation continuing to be an ineffective entity.

Offer of IOHK & Emurgo

The Foundation and its council have not been able to execute their purpose in promoting and supporting the Cardano ecosystem. So, to provide the Cardano ecosystem with the support and services it requires and deserves, in the Foundation’s stead, IOHK and Emurgo are committed to the following actions until at least 2020:

  1. IOHK and Emurgo will begin hiring dedicated community managers for the Cardano ecosystem and assign them to growing and informing our community through meetup groups, events, educational efforts and other metrics that can be tracked.

  2. IOHK is willing to hire, subject to reasonable due diligence and negotiations, Cardano Foundation personnel directly engaged in community management should they desire to leave the Foundation.

  3. IOHK will work with Emurgo to start efforts in Japan to improve exchange access and community understanding of Cardano.

  4. IOHK and Emurgo will scale up their educational and marketing efforts to include more content about the Cardano protocols, developer resources and USPs of our ecosystem.

  5. IOHK has hired an open source community manager to draft the Cardano improvement proposal process and begin its rollout.

  6. IOHK has expanded its research scope to include the areas originally foreseen for the Cardano Foundation.

  7. IOHK will start a research agenda to design a decentralized Foundation built as a DAO to be deployed on the Cardano computation layer. We will announce a dedicated research center at a later date.

Final thoughts

First, IOHK and Emurgo’s funding for the Cardano project is fully secured, independent, and not connected to the Cardano Foundation. The Foundation is not in a position to mandate or compel changes in the operations of the Cardano platform, IOHK, or Emurgo.

Second, the original intention of separating powers within the Cardano ecosystem was to ensure that the failure of one entity would not destroy the project. This resilience has allowed us to thrive, despite the Foundation’s lack of progress and vision.

Third, the real strength of Cardano stems from its exceptional community, which continues to grow and impress us. The Foundation’s role is similar to the Bitcoin Foundation’s, in that its purpose is to add value to the community. Like the Bitcoin Foundation for Bitcoin, the Cardano Foundation is not necessary for Cardano to succeed as a project.

And last, but not least, for IOHK, Cardano is more than a product. Cardano is a mission to deliver a financial operating system to the three billion people who need a new one. Our personnel have been to more than 50 countries over the past three years representing Cardano. We will continue to do so over the coming years because we see the power of this technology and the people it can help.

As the CEOs of IOHK and Emurgo, we are deeply disappointed that we have not been able to activate and increase the performance of the Foundation. We have not been able to resolve the above outstanding matters in another way. We are also deeply disappointed that our community has been repeatedly let down by the Foundation, yet we are determined to ensure that the community will be served in the manner it deserves to be served.

Regardless of the above, we believe our best days are ahead of us. We believe Cardano will become the best technology to deliver financial infrastructure to the billions who lack it.

Charles Hoskinson,
Chief Executive Officer,
Input Output HK Ltd.
Ken Kodama,
Chief Executive Officer,
Emurgo

This article has been corrected to reflect the fact that Bruce Milligan is Michael Parsons’ stepson, rather than son-in-law, as previously stated.

Artwork, Creative Commons Mike Beeple

Functional correctness with the Haskell masters

Training to build quality code on scientific excellence

26 September 2018 Lars Brünjes 6 mins read

Functional correctness with the Haskell masters

At IOHK, we are proud of our scientific approach and close collaboration with academia. We publish in peer reviewed scientific journals and present our results at acclaimed international conferences to ensure that our protocols and algorithms are built on rock-solid foundations. Our software must reflect this scientific excellence and quality, which means that we need a process to go from scientific results to actual code written in the Haskell programming language. We therefore decided to run internal training on “functional correctness”, so that the quality of our theoretical foundations can translate into equal quality for our code.

We ran the first course over four days in Regensburg, Germany, two weeks ago. This training is aimed at everybody writing Haskell at IOHK, so we decided to run four sessions, roughly based on geography – there are IOHK engineers in 16 countries. We plan to do a second session in Regensburg in November and then two more early next year in the US.

The lecturers were Andres Löh, co-founder of the Well-Typed consultancy, and John Hughes, the founder of QuviQ, who are both prominent in the Haskell world.

John is one of the creators of Haskell and the co-inventor of QuickCheck, the Haskell testing tool. Most mainstream software companies (if they do testing at all, which, sadly, is not always the case) use unit tests. For this, developers write down a number of tests by hand, cases that they deem typical or relevant or interesting, and then use a unit test framework to run the tests and report whether they yield the expected results. QuickCheck is different. Instead of specifying a handful of tests, developers using QuickCheck state the properties that their code should have. QuickCheck then generates many random test cases and checks the property for each of these. If QuickCheck finds that a property is violated, it first tries to simplify the test, then reports the simplest failing case back to the user.

Learning in Regensburg
Haskell students in class

As a simple example, let’s say you wrote a program to sort a list of names. Using unit tests, you would check the program against a few handcrafted examples of lists of names (something like "Tom", "Dick", "Harry" and "Dora", "Caesar", "Berta", "Anton"). With QuickCheck, on the other hand, you would sit down and carefully think about the properties your program should have. In the example of sorting lists of names, what properties would you expect? Well, after running the program, you should get a list that is sorted alphabetically. Oh, and that list should contain all the names you entered. And yes, it should only contain those names you entered. You can write down these properties as Haskell programs, then hand them over to QuickCheck. The tool checks your properties against as many randomly generated lists of names as you wish (usually hundreds or thousands) and identifies any violations.
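Those properties can be written down almost verbatim. The sketch below is illustrative rather than taken from the training material; `sortNames` is a hypothetical stand-in for the program under test (here it simply reuses the library `sort`, but any implementation could be dropped in):

```haskell
import Data.List (sort)
import Test.QuickCheck

-- Hypothetical program under test: any sorting implementation
-- could be substituted here.
sortNames :: [String] -> [String]
sortNames = sort

-- Property 1: the output is in alphabetical order.
prop_sorted :: [String] -> Bool
prop_sorted names = isSorted (sortNames names)
  where
    isSorted xs = and (zipWith (<=) xs (drop 1 xs))

-- Properties 2 and 3: the output contains all the input names and
-- only those names, i.e. it is a permutation of the input.
prop_permutation :: [String] -> Bool
prop_permutation names = sort (sortNames names) == sort names

main :: IO ()
main = do
  quickCheck prop_sorted
  quickCheck prop_permutation
```

QuickCheck generates random lists of strings for each property. A deliberately buggy `sortNames` that, say, dropped duplicate names would pass `prop_sorted` but fail `prop_permutation`, and QuickCheck would shrink the failing input to a minimal counterexample.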

In practice, QuickCheck often manages to find problems that are overlooked by less rigorous methods, because test authors tend to miss obscure cases and complicated scenarios. In our example, they may, for example, forget to test an empty list of names. Or there may be a bug in the program that only occurs for long lists of names, and their unit tests only check short lists. John had many ‘war stories’ of this happening in real life with real customers, where bugs were only revealed after a series of complex interleaved operations that no human unit test writer would have imagined.

Every Haskell developer has heard of QuickCheck and understands the basic ideas, but in complex real-world programs like Cardano, it is sometimes not so easy to use the tool properly. It was therefore great to have the intricacies and finer points explained by John himself, who has been using QuickCheck for 20 years and has worked with many industries, including web services (Riak, Dropbox and LevelDB), chat servers (Ejabberd), online purchasing (Dets), automotive (Autosar specification), and telecommunications (MediaProxy, Ericsson and Motorola). He helps find bugs and guarantee correctness every day. Given John’s experience, the training participants were able to spend about half of their time learning the finer points of QuickCheck from the master himself. It was tremendous fun enjoying John’s obvious enthusiasm for, and deep knowledge of, the subject. The rest of the session was dedicated to understanding the link between formal specifications, written in a mathematical style, and Haskell implementations.

Exploring Regensburg
IOHK in Regensburg

At IOHK, we work very hard on writing correct code. For example, we specify program behavior and properties using rigorous mathematics. In the end, of course, we can’t deploy mathematics to a computer. Instead, our developers have to take the specification, translate the mathematics into Haskell and produce executable, efficient code. This process is easier for Haskell, because it is firmly rooted in mathematical principles, than for most languages, but it is still a conceptual leap.

The specification talks about mathematical objects like sets and relations, which have to be translated into data types and functions as faithfully as possible. Nobody wins if your beautiful mathematics is ‘lost in translation’ and you end up with bug-ridden code. For example, when mathematicians talk about integers (..., −2, −1, 0, 1, 2,...) or real numbers (such as π, and √2), how do you express this in Haskell? There are data types like Int or Double that seem related, but they are not the same as the mathematical concepts they were inspired by. For example, a computer Int can overflow, and a Double can have rounding errors. It is important to understand such limitations when translating from mathematics to code.

This is where the mathematician and renowned Haskell expert Andres Löh came in. He taught the participants how to read mathematical notation, how mathematical concepts relate to Haskell and how to translate from the one to the other.
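A tiny snippet makes the gap concrete. This example is purely illustrative and assumes GHC’s usual 64-bit Int:

```haskell
-- Int is a fixed-width machine integer: adding 1 to its largest
-- value wraps around rather than producing a bigger number.
overflow :: Int
overflow = maxBound + 1        -- a large negative number

-- Double is binary floating point, so some decimal fractions
-- cannot be represented exactly.
rounding :: Bool
rounding = 0.1 + 0.2 == (0.3 :: Double)   -- False

-- Mathematical integers translate faithfully to Integer,
-- which is arbitrary precision and never overflows.
faithful :: Integer
faithful = 2 ^ 100
```

Choosing Integer over Int is exactly the kind of judgment call the specification-to-code translation involves: Integer is faithful to the mathematics, while Int may be faster but carries limitations the specification never mentions.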

For example, Andres presented the first pages of our formal blockchain specification and talked the participants through understanding and implementing this piece of mathematics as simple (and correct!) Haskell code, which led to interesting questions and lively discussions: How do you represent hashing and other cryptographic primitives? What level of detail do you need? Is it more important to stay as faithful to the mathematics as possible or to write efficient code? When should you sacrifice mathematical precision for simplicity?

In addition to their great lectures, John and Andres also provided challenging practical exercises, where participants could immediately apply their newly-gained knowledge about testing and specifications. Finally, there was plenty of opportunity for discussions, questions and socializing. Regensburg is a beautiful town, founded by the Romans two thousand years ago and a Unesco World Heritage Site. The city offered participants a perfect setting to relax after the training, continuing their discussions while exploring the medieval architecture or sitting down for some excellent Bavarian food and beer.

Artwork, Creative Commons Mike Beeple