The knowledge base for Cardano is growing: Cardano Stack Exchange graduates from Beta version

A fresh Q&A site will facilitate knowledge-sharing for all Cardano related topics

7 February 2022 Ignacio Calderon de la Barca 4 mins read

Cardano Stack Exchange (CSE), a community-driven knowledge base for Cardano, is now recognized by Stack Exchange as a mature learning community, putting it in a class with some of the biggest knowledge curation sites on the internet.

The demand for in-depth technical knowledge about Cardano has consistently increased as more people invest their time in building on Cardano. Such demand couldn’t (and shouldn’t) be met by any single entity, and that led to the community-driven approach of CSE.

Plutus developers, researchers, stake pool operators, Cardano project team members, and founding entities – experts of all stripes from around the ecosystem – have come together to meet this demand. The fact that CSE has graduated from its trial period – dropping the ‘Beta’ label – confirms that it has reached a critical mass of useful, decentralized knowledge.

From ‘Area 51’ to full site: the Stack Exchange journey

The Cardano community anticipated the value of having a Stack Exchange site early on. It has been 10 months since a group of community visionaries launched CSE through Stack Exchange’s ‘Area 51’ staging zone, an initiative championed by community member raghu.

Since then, a diverse group of community members has put their passion and knowledge to work, curating information and documenting solutions for the Cardano ecosystem.

The company behind Stack Exchange facilitates and referees the launch of new communities, and the process is far from easy. For a Stack Exchange initiative to fully launch, it goes through six steps: Discussion, Proposal, Community Commitment, Private Beta, Public Beta, and Graduation.

Conquering them all is a major accomplishment. The ongoing success of this project would not have been possible without the help of many contributors. We especially acknowledge the work of CSE moderators Marek Mahut, Matthias Sieber & Glenn Rieger; IOG team members Lars Brünjes & Samuel Leathers; and the effort of top users like eddex, Mitchell Turner, zhekson, nalyd88, gorgeos, & Andy Jazz. Additionally, we thank Hassan Khalil for his work on the Beta site analytics.

The image below shows the top users, their reputation scores, and the topics they answered most.

Future vision

Stack Exchange is a model for Q&A-focused knowledge curation, as well as a federation of learning communities empowered through merit-based editorial powers and moderator elections. The importance of this platform becomes evident (especially in the context of open source projects) from the example of its most iconic representative: Stack Overflow. Stack Overflow has long been a key community hub for developers, paving the way for the success and adoption of many of the most popular programming languages.

The winning strategy in tech is to leverage a self-governing, self-sustaining community.

So, what does this mean for CSE’s journey? We have just ticked off a major milestone; however, this is only the beginning of a long-term vision for the platform. As hinted above, CSE has the potential to enrich Plutus and Marlowe development, similar to the way that Stack Overflow enriched Python, JavaScript, and C.

Get involved

This is already the place where developers meet and share knowledge. The next step is to increase the quality of engagement and elevate the creative range of the user base for our ecosystem’s native programming languages, our protocol, and the full Cardano stack.

To the Technical Community: expect Discord Stages dedicated to CSE, where we will highlight top CSE questions and answers and discuss interesting concepts across the many topics involved in building on Cardano.

If you are a developer or Cardano enthusiast, get involved. When you do, remember to follow best practices as you vote, comment, ask questions, and help others. If you are new to the platform, here are the highlights of what you can achieve:

  • Gain reputation by asking and answering questions.
  • Collect badges that mark your progress on the platform.
  • Get specialized information by filtering questions under tags (e.g., plutus, plutustx, cardano-node).

What does a strong question or answer look like? We’ve pulled an example from CSE, with key points noted below. A strong post:

  • Cites important sources and quotes the most relevant passages.
  • Uses formatting (e.g., lists, bold, or italics) to emphasize key information.
  • Is broken into paragraphs for readability.
  • Uses the comments function to improve the quality of the post.

Moving out of beta is not only an important milestone. It’s also a reminder of the power of community action and of the ‘compounding’ of knowledge that comes when we work together. We have all been part of this journey, and now we need to keep up the good work.

I’d like to thank Matthew Capps for his input and contribution in preparing this blog post.

Implementing Hydra Heads: the first step towards the full Hydra vision

The Hydra Head, the first in a suite of protocols, is an important element of Cardano’s scaling journey. Let’s see how it fits into the bigger picture. And maybe bust some myths.

3 February 2022 Matthias Benkort 12 mins read

We’ve done the science and the theory. We have laid the foundations for a scalable, versatile, and high-throughput blockchain. Now it’s time for steady growth and system enhancements. With the goal of creating an optimized ecosystem to support and foster decentralized applications (DApps) development, Cardano is in the foothills of the Basho phase. With smart contracts already in place, Basho is all about scaling and network optimization. The Hydra protocol family is a key component for this.

We have talked about Hydra before. Hydra is an ensemble of layer 2 solutions designed to improve network scalability while preserving security. Originally conceived within the work of the Ouroboros research team, it has in fact forged an independent path since the original paper’s publication. Hydra offers increased throughput, minimized latency, and cost-efficient solutions without substantial storage requirements. The Hydra Head protocol was already shaping up back in 2020, and our thinking has developed since then: building on that initial idea, the protocol matured into a proof of concept and has continued to evolve as we head toward a more defined implementation for the testnet MVP.

We have seen plenty of excitement (great!) along with misconceptions and misunderstandings (not so great). Most of these have arisen from the original idea statement rather than the actual protocol implementation, and some of our earlier blog posts have perhaps contributed to them. In particular, the Hydra Head protocol is not solely about SPOs running Heads, nor about the theoretical ‘1 million TPS’ – a figure that needs to be caveated and better explained.

In this article, we – the Hydra engineering team – outline our current progress, our approach, and our near and long-term roadmap. We’ll demystify some misconceptions, clarify the benefits and reflect on development challenges.

Hydra Head in a nutshell

Let’s first re-introduce Hydra Heads, which involve not only a robust networking layer between peers and an integrated Cardano ledger, but also several on-chain scripts (smart contracts) that drive the lifecycle of a Hydra Head.

A Hydra Head is a provably secure isomorphic state channel. Simply put, it is an off-chain mini-ledger between a restricted set of participants, which works similarly (albeit significantly quicker) to the on-chain main ledger.

The first thing to understand is that a channel is a communication path between two or more peers. To be part of a Head means being one of those peers. Channels form isolated networks that can evolve in parallel to the main network. On these alternative networks, participants follow a different, simpler, consensus algorithm: everyone needs to agree on all transactions flowing through. A consequence of this is that, as a participant, I cannot lose money I haven't explicitly agreed to lose. Why? Because any valid transaction requires my explicit approval.
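
To make the unanimity rule concrete, here is a minimal Haskell sketch. Everything in it (the Participant and HeadTx types, the validInHead check) is invented for illustration and is not part of the actual Hydra codebase:

type Participant = String

data HeadTx = HeadTx
  { txPayload :: String        -- the transaction itself (abstracted away)
  , txAcks    :: [Participant] -- participants who have signed off on it
  }

-- A transaction is applied inside the Head only once every member of
-- the Head has explicitly acknowledged it.
validInHead :: [Participant] -> HeadTx -> Bool
validInHead members tx = all (`elem` txAcks tx) members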

When forming a Head, participants may commit funds to it. This means moving funds on-chain to a script address that locks them under specific rules. The script guarantees safe execution of the protocol on-chain, and in particular, that participants cannot cheat one another. At any time, however, any participant may decide to quit the Head by closing it. In this case, all participants walk away with the latest state they had agreed to off-chain, on their parallel network.

Think of Heads as ‘private poker tables’ where participants bring their own chips to play the game. Participants can play for as long as they want. If someone doesn't play, then the game doesn't progress. Yet, participants are still free to walk away with their chips. If they do so, the game ends with the current wealth distribution.

Figure 1. Hydra Head (simplified) life cycle

The dealer at the table (the on-chain script) ensures that people play by the rules and don't cheat. In the end, there are as many chips out as there were chips in, but they may have been redistributed during the course of the game. While the final result is known outside of the table, the history of all actions that happened during the game is only known to the participants.
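
The dealer’s bookkeeping can be captured in a couple of lines. This is a simplified, hypothetical model – the names are invented and the real on-chain script checks much more (signatures, contestation periods, and so on) – but it shows the invariant: closing a Head may redistribute funds, never create or destroy them:

import qualified Data.Map as Map

type Lovelace = Integer

-- Maps each participant to their share of the funds in the Head.
type Distribution = Map.Map String Lovelace

-- A closing distribution is acceptable only if the total paid out
-- equals the total committed: chips out == chips in.
closeIsValid :: Distribution -> Distribution -> Bool
closeIsValid committed final =
  sum (Map.elems committed) == sum (Map.elems final)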

This protocol is one of a whole suite of protocols that we usually refer to as ‘Hydra’. The current engineering effort is focused on implementing the Hydra Head protocol as published in Hydra: Fast Isomorphic State-Channels by Chakravarty et al.

Around the end of 2021, Maxim Jourenko, Mario Larangeira, and Keisuke Tanaka published an iteration on top of Hydra Head called Interhead Hydra: Two Heads are Better than One. This iteration defines a method for interconnecting two Heads, enabling, in the long run, the creation of a network of interconnected Hydra Heads. Previously, there were mentions of other protocols like the ‘Hydra Tail’. However, those are still under research, along with new ideas coming from the recent work on the Hydra Head protocol.

Hydra misconceptions

Recently we have seen a lot of commentary positioning Hydra as the ‘ultimate’ solution for Cardano scalability. For sure, Hydra Heads make for a strong foundation to build a scalability layer for Cardano. They are an essential building block that leverages the power of the Extended Unspent Transaction Output (EUTXO) model to enable more complex solutions on top. They are a critical element of the scalability journey, but they are not the final destination.

Scalability isn’t about a million TPS

Before talking about scalability metrics, let’s clarify a few things about transactions per second (TPS). Of all the metrics available, TPS is probably the least meaningful as a means of comparison. Transactions come in different shapes and sizes. While this is true within Cardano itself, it matters even more when comparing two drastically different systems.

Think about a highway and vehicles. One can look at how many ‘Vehicles Per Second’ (VPS) the highway can handle between two points. Yet, if there’s no common definition of what a vehicle is, then comparing 10 VPS to 100 VPS is meaningless. If the 10 vehicles are massive cargo trucks, does it make sense to compare their delivery capabilities to those of 100 scooters? The same applies to transactions. A transaction carrying hundreds of native assets and outputs is certainly not the same as a single ada payment between two actors.

Using TPS as a metric within the same context (for example, to compare two versions of the Cardano node) is meaningful. Using it as a means of comparison between blockchains isn’t.
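
To put numbers on the analogy, here is a tiny illustrative Haskell sketch. The figures are assumptions for the example only: roughly 300 bytes for a simple payment and roughly 16,000 bytes for a large transaction near Cardano’s transaction size limit:

type Bytes = Double

-- TPS hides transaction size; effective data throughput does not.
bytesPerSecond :: Double -> Bytes -> Bytes
bytesPerSecond tps avgTxSize = tps * avgTxSize

-- 10 'cargo trucks': large multi-asset transactions (~16,000 bytes each).
trucks :: Bytes
trucks = bytesPerSecond 10 16000    -- 160,000 bytes/s

-- 100 'scooters': simple payments (~300 bytes each).
scooters :: Bytes
scooters = bytesPerSecond 100 300   -- 30,000 bytes/s

Despite handling ten times fewer transactions per second, the ‘truck’ system moves more than five times as much data per second.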

With that in mind, we suggest looking not only at throughput, but also at finality and concurrency as important metrics when considering and discussing scalability:

  • throughput: the volume of data processed by a system in a given amount of time
  • finality: the time it takes for the result of some action to become immutable and true for everyone in the system
  • concurrency: the amount of work that can be done by different actors without blocking each other

Hydra Heads excel in achieving near-instant finality within a Head. The process of setting up and closing a Head can take a few blocks, but once established, transactions can flow rapidly across collaborative participants. Since Hydra Heads also use the EUTXO model, they can process non-conflicting transactions concurrently, which – coupled with good networking – allows for optimal use of the available resources. The first simulations of the Hydra Head protocol back in 2020 suggested a very promising ‘1000 TPS’. We are now in the process of benchmarking the real implementation in terms of throughput and finality.

One caveat: a Hydra Head is a very local construct within a small group of participants. These groups will initially be independent and thus, looking at the sum of their individual metrics as a whole is misleading. Since groups are independent and can be independently created at will, it is easy to reach any figure by just adding them up: ten, a thousand, one million, one billion, and so on.

Consequently, while the first version of the Hydra Head protocol will allow for small groups of participants to scale up their traffic at low cost, it won’t immediately offer a solution for global consumer-to-consumer (micro) payments or NFT sales. Why? Because the consensus inside a Head requires every participant to react to every transaction. And a single head doesn't scale infinitely with the number of participants, at least not without some additional engineering efforts. For example, the interconnection of Hydra Heads paves the way for larger networks of participants, effectively turning local Heads into a global network. We are exploring several other ideas to extend the Hydra Head protocol to broaden the set of use cases it can cover. We will talk more about that in the next sections and in future updates.

Use cases and the role of SPOs

So when are Heads useful? Hydra Heads shine when a small group of participants need to process many quick interactions. Imagine, for example, a pay-per-use API service, a bank-to-bank private network, or a fast-paced auction between a seller and a small group of bidders. Use cases are plentiful and take various forms. Some may be long-running Heads lasting for months, whereas others may be much shorter, lasting only a few hours.

Our initial Hydra research in 2020 suggested stake pool operators (SPOs) as likely candidates for running Hydra Heads. However, as the Hydra Head protocol has been researched and built as a proof of concept, we can firmly state that it is a misunderstanding to say that only SPOs should run a Hydra Head to ensure ledger scalability. In fact, SPOs have no intrinsic interest in opening Heads between each other without a reason to transact (tipping or trading NFTs, for example). In a way, SPOs are like any other actor when it comes to the Hydra Head protocol. They can be a participant and open up Heads with other peers, but so can anyone interested.

Admittedly, SPOs are good at operating infrastructure and can be some of the first users running instances of the Hydra Head protocol. Still, this only allows participating SPOs to transact with one another, which limits use cases for end users. Only advanced layer 2 system designs like the Interhead Hydra protocol require intermediaries to run infrastructure for the benefit of end users. In fact, we anticipate that one likely setup will be providing users with managed Hydra Heads as a service (HaaS). This can be achieved, without users giving up custody of their funds, by running the infrastructure on behalf of end users, who generally have neither the interest nor the technical skills to maintain such infrastructure.

This is very similar to the current operational model of light wallets and light wallet providers that are much more likely to be running Hydra Heads in the long run. Imagine a network composed of the top light wallet providers within the Cardano ecosystem. Such providers can then facilitate instant and cheap payments between their users while ensuring overall trust.

We also envision that services for developers and DApp providers will be likely candidates for running Hydra Heads. Indeed, DApp developers require access to on-chain information. For that, developers may rely on online services that provide adequate interfaces and typically charge monthly usage fees. Hydra Heads can improve this process, enabling a more decentralized business model with pay-per-use API calls between service providers and DApp developers.

The roadmap

Because Hydra is a group of protocols that will be delivered over time, involving more elaborate layer 2 system designs on top of the Hydra Head protocol, it is crucial that we engage frequently with developers in the Cardano ecosystem. This is not about a ‘big bang’ release but rather an iterative release cycle. We need to understand developer challenges, make sure we meet their needs, and ultimately ensure we are building something useful. This is why we are developing Hydra Head as an open-source GitHub project, starting with an early proof of concept last year. Aiming for a regular and frequent release cadence, we released our initial developer preview in September (0.1.0) followed by a second iteration (0.2.0) before Christmas. The next increment (0.3.0) is coming up in February. We follow semantic versioning, and each of these pre-releases (0.x.0) adds features that our partners and early adopters can test out on private and public Cardano testnet(s).

We’re delighted to announce that our roadmap is now also available on GitHub! As a means to engage with our community of developers and to be transparent about the course of our development efforts, you will find feature issues, milestones, and project boards on the Hydra Head repository.

While our focus is creating meaningful and feature-packed releases as we journey along testnet and later mainnet maturity with version 1.0.0, the roadmap also includes tentative dates. These forecasts stem from both the work accomplished so far and our estimates of the work remaining ahead. We’ll reflect on the content and the dates regularly in an agile manner to keep the roadmap as accurate as possible.

Community feedback is essential

We will measure our success by how much traffic runs through Hydra Heads in comparison to the Cardano mainnet. This means that we can’t reach our goal without the community, and Hydra can only be successful if it is useful to current and future Cardano users.

Depending on your time, skills, and expertise, we welcome you to engage with us: share questions and feedback, or contribute to the development effort. This is a stellar opportunity to build a whole ecosystem of layer 2 solutions for Cardano together. The Hydra Head protocol will be the first building block of many advanced solutions to come. At IOG, we have already started working on some of them, but some will inevitably (and fortunately!) be built by members of our community, whom we look forward to supporting.

We’ll talk about Hydra Heads in more detail during February’s mid-month development update. Subscribe to our YouTube channel and come join us!

I’d like to thank Sebastian Nagel, Olga Hryniuk, Mark Irwin, and Tim Harrison for their input and support in preparing this blog post.

Introducing pipelining: Cardano's consensus layer scaling solution

Pipelining is one of the key scaling improvements to be deployed in 2022. Here’s how it works and why it matters

1 February 2022 John Woods 4 mins read

You’d be forgiven for thinking that pipelining sounds like a remodelling procedure a plumber might employ. In a way, this isn’t too far from the truth. Pipelining is, effectively, an evolution in Cardano’s ‘plumbing’. It is a key element in our scaling plan this year, one in the series of published steps covering our methodical approach to flex Cardano’s capacity as the ecosystem grows.

Scaling and throughput are crucial considerations for any blockchain, if growth and competitiveness are to be maintained. As Cardano enters the Basho phase of development, we're laser-focused on ensuring that Cardano scales to meet the growing needs of the ecosystem. In other words, we need to ensure that the underlying protocol – Ouroboros Praos – operates fast enough for the plethora of decentralized applications now deploying or lining up to launch on Cardano.

Cardano will continue to be steadily optimized in a series of measured steps, carefully and methodically scaling to support future growth as demand increases. The changes introduced by the release of node 1.33.0 in early January gave us additional headroom to modify some network parameters, including block size and memory units. Adjustments here have a direct bearing on how Cardano handles network traffic in volume, and we continue to monitor network performance closely.

Close observation of real-world network performance – and, importantly, of the cumulative impact of parameter changes – will be key throughout this process. Following each update, we carefully monitor and assess the network across at least one epoch (five days) before continuing with further adjustments. However much research and engineering work has gone into designing and deploying the system, a decentralized network architecture needs to be scaled based on real-world user behaviours and usage.

Introducing pipelining

Pipelining – or more precisely, diffusion pipelining – is an improvement to the consensus layer that facilitates faster block propagation. It creates even greater headroom, enabling further increases to Cardano’s performance and competitiveness.

To understand how this technique achieves its intended goal, let's recap how blocks propagate at present.

Currently, a block goes through six steps as it moves across the chain:

  1. Block header transmission
  2. Block header validation
  3. Block body request and transmission
  4. Block body validation and local chain extension
  5. Block header transmission to downstream nodes
  6. Block body transmission to downstream nodes

A block’s journey is a very serialized one. All steps happen in the same sequence every time, at every node. Considering the number of nodes and the ever-growing volume of blocks, block transmission takes a considerable amount of time.

Diffusion pipelining overlays some of those steps on top of each other so they happen concurrently. This saves time and increases throughput.
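
A simplified timing model shows why the overlap matters. The per-hop timings below are invented for illustration, not measurements of the Cardano node:

type Ms = Double

-- Illustrative per-hop timings (not measured values).
headerStep, bodyTransfer, bodyValidation :: Ms
headerStep     = 10   -- header transmission and validation
bodyTransfer   = 50   -- body request and transmission
bodyValidation = 40   -- body validation and chain extension

-- Serialized diffusion: every step finishes at each hop before the
-- next hop begins, so per-hop costs add up in full.
serialized :: Int -> Ms
serialized hops =
  fromIntegral hops * (headerStep + bodyTransfer + bodyValidation)

-- Pipelined diffusion (simplified): a hop forwards the block as soon
-- as it has received it, overlapping its own body validation with the
-- transfer to the next hop, leaving one validation on the critical path.
pipelined :: Int -> Ms
pipelined hops =
  fromIntegral hops * (headerStep + bodyTransfer) + bodyValidation

With these made-up figures, five hops take 500 ms serialized but only 340 ms pipelined, and the saving grows with every additional hop.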

The time saving afforded by this technique will lead to even more headroom to further scale Cardano, including changes to:

  • Block size - the bigger the block, the more transactions and scripts it can carry
  • Plutus memory limits - the amount of memory available for a Plutus script to run
  • Plutus CPU limits - more computational resources can be allocated for a script to run more efficiently

Implementing pipelining

One of the design principles behind diffusion pipelining was to achieve faster block propagation while avoiding ‘destructive’ changes to the chain. We did not want to remove any of the protocols, primitives, or interactions already happening in Cardano, because nodes rely on these established mechanisms. We wanted full backwards compatibility, so instead of changing the way things currently work, we're adding a new mini-protocol whose job is to pre-notify subscribed entities when a new desirable block is seen, prior to full validation.

The key change introduced by pipelining is the ability to pre-notify peers and give them a block before it is validated, which enables the downstream peer to pre-fetch the new block body. This saves a lot of time because it dramatically reduces end-to-end block propagation time across multiple hops.

In conclusion

Pipelining is just one of the pillars supporting Cardano's scaling this year. Combined, all these changes will lead Cardano to a position where it is faster than its competitors, and a highly competitive platform for decentralized finance (DeFi) this year.

Fernando Sanchez contributed to this article.

Simple property-based tests for Plutus validators

How to write off-chain code with the 'cooked-validators' library and get property-based tests for free

27 January 2022 Victor Cacciari Miraldo 8 mins read

We recently heard from Victor Miraldo, who leads the smart contract verification and auditing team at Tweag, about the importance of verification for security reasons in the world of decentralized finance (DeFi). Victor is a Haskell and formal methods engineer committed to ensuring the safety and correctness of decentralized apps (DApps) through tools and processes. In this blog post he outlines how writing and deploying a DApp is simply not enough, and why every developer should thoroughly test all on-chain code and Plutus scripts against a range of bad actors. For that, he introduces a library of ready-made tools for interacting with Plutus validator scripts – called cooked-validators, developed at Tweag. This library helps implement the innermost layer of off-chain code, which is responsible for generating and submitting transactions. By using this library you can get property-based tests at the transaction level for free.

Let's hear what Victor had to say about using their library.

Introduction

Transaction-level tests enable us to submit arbitrary transactions to the validator scripts and to assess their behavior. This process differs from testing a whole smart contract using the defined endpoints as part of the off-chain code of the contract. After all, that off-chain code was designed to seamlessly cooperate with the on-chain code and will have its own intrinsic security and safety checks. This method works for normal operations, but in a testing setup, it often shields on-chain validator scripts from ill-formed or even malicious inputs. Therefore, for transaction-level testing, we want to circumvent the sanitising off-chain code and hammer on-chain scripts with all the same might that an attacker’s hand-crafted off-chain infrastructure might bring. As an analogy with web services, you often want to test your server by sending it arbitrary requests, in addition to those requests that are permitted by the client's user interface.

The cooked-validators library enables you to write the off-chain code responsible for generating and submitting transactions, and to use the same code for executing and testing your contract at the transaction level. This makes it much easier to write tests for the on-chain code that detect whether a number of bad things can or cannot happen.

About the 'cooked-validators' library

Building your contracts with cooked-validators isn’t very different from what you are already used to with the Contract monad. Say you followed the tutorial on the Split contract up to and including the ‘Defining the validator script’ section. At the end, you have a splitValidator function that executes the on-chain part of that contract. If you did not follow the tutorial: the splitValidator contract locks an amount of funds that can only be unlocked by splitting it between two previously specified parties.

Now, to interact with that contract itself, we need to write the off-chain code, which generates and sends the necessary transactions to the blockchain. Instead of doing that directly in the Contract monad, we'll rely on the cooked-validators library. The lockFunds transaction can be written as follows:

lockFunds :: (MonadBlockChain m) => SplitData -> m ()
lockFunds s@SplitData{amount} = void $ validateTxConstr
  -- The SplitData itself serves as the datum locked at the script address.
  [PaysScript splitValidator [(s, Ada.toValue amount)]]

This is very similar to the lockFunds we’d have written in the Contract monad directly. The difference is that here we use an arbitrary MonadBlockChain monad. This technique enables us to use the same lockFunds for two purposes:

  1. generating the transaction, since the Contract monad is an instance of MonadBlockChain, and

  2. writing tests for the on-chain validators using the cooked-validators facilities.

Let's say that we've also defined the unlockFunds transactions (code to use), so that cooked-validators will interact seamlessly with the Contract monad. In fact, we can define the endpoints function just like in the tutorial:

endpoints :: (AsContractError e) => Promise w SplitSchema e ()
endpoints = selectList [lock, unlock]
  where
    lock = endpoint @"lock" (lockFunds . mkSplitData)
    unlock = endpoint @"unlock" (const unlockFunds)

Testing the contract

Because we have defined the first layer of our off-chain code (which generates and submits raw transactions) with cooked-validators, we can use its testing infrastructure to test the on-chain validators. A basic test of whether it is possible to unlock funds that have been locked could look like this:

unlockPossible1 = assertSucceeds $ do
  lockFunds sd `as` wallet 1 -- sends the lockFunds pretending to be user 1,
  unlockFunds `as` wallet 2 -- sends the unlockFunds pretending to be user 2.
 where
  -- makes a split of 10 ada between users 2 and 3 that only those users should be able to unlock.
  sd = SplitData (wallet 2) (wallet 3) 10

Here, the as combinator only works in testing code; it enables us to interact with our contract as many different users would.

The function unlockPossible1 is a unit test that checks whether something good happens. We can just as easily test that something bad does not happen:

unlockImpossible1 = assertFails $ do
  lockFunds sd `as` wallet 1
  unlockFunds `as` wallet 5 -- user 5 shouldn't be able to unlock the funds.
 where
  sd = SplitData (wallet 2) (wallet 3) 10

We can also use these tests as property-based tests. In this case, the property being tested is that either of the two recipients of the split can always unlock:

unlockProp1 = forAllTr tr assertSucceeds
  where
    tr = do
      -- generates a random SplitData
      sd <- genSplitData
      -- generates a random wallet; anyone can lock funds.
      w <- genArbitraryWallet
      lockFunds sd `as` w
      -- but only the recipients can unlock the funds
      unlocker <- choose [ recipient1 sd, recipient2 sd ]
      unlockFunds `as` unlocker

Additionally, if one of our tests fails, we will receive a readable summary of the transactions that caused the test to fail. Here's an excerpt of the first three transactions from a test failure of a more involved validator:

1) ValidateTxSkel
     - Signers: [wallet #1 (a2c20c)]
     - Label: ProposalSkel 2(Payment{paymentAmount = 4200000,paymentRecipient = a96a66})
     - Constraints:
        /\ Mints
            - (18ab4cc $ "threadToken"): 1
            - Policies: [18ab4c]
        /\ PaysScript script 9d52e00:
            - Accumulator{payment = Payment{paymentAmount = 4200000,paymentRecipient = a96a66},signees = []}
              { Lovelace: 6200000
                (18ab4cc $ "threadToken"): 1 }

2) ValidateTxSkel
     - Signers: [wallet #1 (a2c20c)]
     - Constraints:
        /\ PaysScript script 9d52e00:
            - Sign{signPk = a2c20c,signSignature = 8fef22}
              Lovelace: 1

3) ValidateTxSkel
     - Signers: [wallet #2 (80a4f4)]
     - Constraints:
        /\ PaysScript script 9d52e00:
            - Sign{signPk = 80a4f4,signSignature = 6853e0}
              Lovelace: 1
...

The trace that is displayed to the developer contains all the information necessary to debug the issue, and it tries to present the information in a readable manner.

In addition to property-based testing, cooked-validators also provides the ability to modify transactions in a trace according to some function. This can simulate an attack in many different ways. For example, writing a test such as:

attackNotPossibleOnSplit = assertFails $ do
  somewhere doAttack $ do
    lockFunds sd `as` wallet 1
    unlockFunds `as` wallet 2
 where
  sd = SplitData (wallet 2) (wallet 3) 10

will cause cooked-validators to attempt to execute two tests, both of which should fail, as follows:

  1. modify the lockFunds sd transaction according to doAttack, then submit; then submit an unmodified unlockFunds or

  2. submit lockFunds sd; then modify and submit unlockFunds according to doAttack.
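
The expansion behaviour can be modelled generically. The following sketch is not cooked-validators’ actual implementation, just a simplified model of what somewhere does: given a modification and a trace, it produces one candidate trace per position at which the modification could be applied:

-- Simplified model of 'somewhere': apply the modification at each
-- position in turn, yielding one candidate trace per position.
somewhereModel :: (tx -> tx) -> [tx] -> [[tx]]
somewhereModel attack txs =
  [ before ++ attack t : after
  | i <- [0 .. length txs - 1]
  , let (before, t : after) = splitAt i txs
  ]

Applied to the two-transaction trace above, this model yields exactly the two cases listed.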

The full details of the somewhere combinator are more complex than this simplified model suggests, so we will omit them here. A separate post on the Tweag blog gives the technical details for those who are interested.

Related libraries and conclusion

Although Plutus already supports a form of property-based testing of contract endpoints with its ContractModel class, it does not provide transaction-level testing. Transaction-level testing is very important for us at Tweag. When auditing a Plutus contract, we need to be able to act like an attacker and modify transactions to study how the validators will react.

By using cooked-validators to develop your off-chain code, you will be able to test many safety and correctness properties of your on-chain code, and this can greatly increase your confidence in the correctness of the code. That can save time and money during an audit. In fact, the first step of a Tweag audit is to write the transaction-generating code using cooked-validators, to then be able to interact with our client's infrastructure freely.

Plutus fee estimator: find out the cost of transacting on Cardano

Our new fee estimator – released today – will help developers estimate the cost of smart contract scripts for maximum efficiency and minimum cost

21 January 2022 Kevin Hammond 6 mins read

The ‘Alonzo’ smart contract upgrade deployed to the Cardano mainnet in September 2021 turned Cardano into a functional platform for the development of decentralized applications (DApps) built in Plutus.

With the Cardano ecosystem steadily growing, a great number of DApps are being built and readied for launch. Whether in final testing, deployment, or active development, these DApps cover DeFi offerings, NFT markets, wallets, exchanges, games, and more.

The deterministic design of the Cardano ledger allows developers to predict how much they will pay for contract execution, and there’s no fee for contract failure. Deterministic transaction processing, low fees, and security – all of these are major benefits of transacting and building on Cardano. Here, we’ll take a closer look at Cardano pricing and introduce a new Plutus fee estimator developed to provide clarity on processing fees.

The benefits of building on Cardano

Many factors influence a blockchain's price competitiveness: functionality, quality, security, and, of course, liquidity.

The design principles underpinning the Cardano ledger ensure high performance while respecting rigorous security properties. Cardano uses an Extended Unspent Transaction Output (EUTXO) accounting model, which greatly contributes to its deterministic design. Determinism refers to the predictability of outcomes. It means that Cardano transactions and scripts can be validated locally (off-chain), letting the user know whether a transaction is valid before executing it on-chain and without paying any fees. Moreover, transaction fees are fixed and predictable. By comparison, smart contract execution costs on Ethereum vary depending on the network load, with fees fluctuating from $5 to hundreds of dollars (see The ridiculously high cost of Gas on Ethereum). Even failed Ethereum transactions may incur fees, creating additional uncertainty about pricing.

In contrast, on Cardano, users can calculate the potential fees for transaction processing in advance. Because users know in advance whether a transaction is valid, there is no need to pay for a transaction that would fail. This avoids wasting funds and eliminates on-chain failures. Cardano’s execution fee in ada is also stable, as it depends on pre-set protocol parameters rather than on varying factors such as network congestion.

Cardano’s pricing model relies on demand over supply

Cardano’s approach to price setting mainly relies on market demand over actual supply. With smart contract support on Cardano, there is now more than one type of demand competing for the common supply. Thus, it is crucial to consider both relative and absolute pricing. One way to do this is to inspect the effects of smart contract pricing, non-fungible token (NFT) operations, etc., with respect to some common value – in our case, the consumption of Cardano’s processing power.

With Cardano, the smart contract pricing model is based on fixed costs, which reflect the resources spent (UTXO size and the computation and memory used when running).

Fees must be paid to fairly compensate stake pool operators (SPOs) for their work and for the resources used to validate network transactions. In addition, making sure that no particular way of using Cardano is substantially cheaper than another helps mitigate whole classes of adversarial attacks (e.g., a classic DDoS attack).

Flexibility is another important feature of the Cardano protocol: its parameters can be changed to adapt to price fluctuations. For example, if ada significantly increases in value, protocol parameters can be adjusted, if required, to prevent users from overpaying for smart contract execution.

Plutus fee estimator

The Plutus fee estimator tool has been developed by IOG for price benchmarking and comparison. Today, we are making it available to developers and curious Cardano users on our public testnet site. The estimator uses information from real-world Plutus transactions to predict the fees that will be charged for a transaction. It can be used to calculate fees for actual transactions (e.g., to determine the fees that will be charged if the network parameters change), and also to estimate fees for individual script transactions or complete DApps before or during development. It may also be useful for determining the effect of script changes or optimizations on fees.

The estimator uses the same fee calculation formula as the actual Cardano node. Given sufficiently accurate inputs, it can give an accurate idea of the required fee. By combining the costs from multiple transactions, a user can easily predict how much a whole DApp might cost. This will be valuable for developers, business analysts, etc. The estimator includes a number of examples based on real transactions that have been verified against actual fees.

Fee calculation requires three pieces of information:

  • The total on-chain transaction size in bytes: a simple transaction, for example, is around 300 bytes, one with metadata is around 650 bytes, and Plutus scripts are typically 4,000-8,000 bytes (future optimizations will reduce this).
  • The number of computational (CPU) steps that the script uses: each step represents 1 picosecond of execution time on a benchmark machine. Typical scripts should consume less than 1,000,000,000 CPU units (1 millisecond).
  • The number of memory units that the script uses: this represents the number of bytes that the script allocates. Typical scripts should consume less than 1,000,000 memory units (1MB of memory allocation).
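
Putting these together, the fee is a linear combination of the three inputs. The sketch below assumes the mainnet protocol parameter values in effect at the time of writing (txFeeFixed = 155381 lovelace, txFeePerByte = 44, and execution unit prices of 0.0577 lovelace per memory unit and 0.0000721 lovelace per CPU step); these parameters can change, so treat the figures as assumptions:

type Lovelace = Double

-- Mainnet parameter values at the time of writing (assumptions;
-- parameters can be adjusted over time).
txFeeFixed, txFeePerByte, priceMem, priceStep :: Lovelace
txFeeFixed   = 155381     -- flat fee per transaction
txFeePerByte = 44         -- fee per byte of transaction size
priceMem     = 0.0577     -- fee per script memory unit
priceStep    = 0.0000721  -- fee per script CPU step

estimateFee :: Double -> Double -> Double -> Lovelace
estimateFee sizeBytes cpuSteps memUnits =
  txFeeFixed
    + txFeePerByte * sizeBytes
    + priceStep * cpuSteps
    + priceMem * memUnits

-- A simple 300-byte payment with no script:
-- estimateFee 300 0 0 == 168581 lovelace, i.e. roughly 0.17 ada.

The 300-byte simple transaction from the list above comes out at 168,581 lovelace, which lines up with the ~0.17 ada estimate discussed below.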

Let’s take a look at example Plutus scripts to understand their execution costs (Figure 1):

Figure 1. Estimated fees for script processing on Cardano

The estimator shows that sending a simple transaction would be as cheap as 0.17 ada, whereas the maximum possible cost for a single script would be 2.17 ada.

The calculation can be extended to DApp execution (see Figure 2). For example, a DApp using three transactions (one simple and two script transactions) might cost ~ 1.50 ada.

Figure 2. Estimated fees for DApp execution on Cardano

The final word

The Alonzo HFC event enabled Plutus script execution on the Cardano mainnet. This was really just the beginning of the journey for Cardano smart contracts. Now, with the launch of major smart contracts projects, we can begin the process of optimization and scaling. This includes the ongoing assessment of actual, real-world smart contract usage.

We need to balance the needs of the user against what is good for the network, weigh speed against correctness, and – as ever – strike the right balance between security, scalability, and decentralization.

Future code/script optimizations and system performance improvements will help to refine the Cardano fees model over time. Together with our developer and stake pool operator communities, we will monitor the growth of smart contracts, optimize the Cardano node and the Plutus interpreter implementations, and make other adjustments to best support our user base in terms of fair and predictable transaction fees.

Check out the Plutus fee estimator on testnets.cardano.org and rest assured: you can easily estimate the processing fee without losing your funds to a failed transaction.