Developer challenge: using blockchain to support the UN’s sustainable development goals

IOHK has set up a $10,000 fund to invest in ideas for sustainable development based on Cardano.

6 October 2020 Eric Czuleger 3 mins read


Creating a decentralized financial and social operating system for the world is the core mission of Cardano. But it’s not one that we can accomplish alone. That’s why we are always on the lookout for relationships that help us build a global foundation for growth. So, we’re thrilled to announce our hackathon challenge to support the UN’s Sustainable Development Goals (SDGs), which are designed to accelerate progress in fighting hunger, injustice, and climate change.

Sustainability and blockchain

In this hackathon challenge we aim to give the blockchain community an opportunity to make an impact on international development. The challenge will draw on IOHK’s expertise in community-focused funding developed with Project Catalyst. This initiative brings innovation, voting, and decentralized funding to Cardano by crowdsourcing development proposals, and financing their implementation.

IOHK and United Nations personnel will use the Project Catalyst platform to find and fund initiatives that align with the UN’s Sustainable Development Goals. These goals were adopted by 193 world leaders in 2015. Together, the 17 goals aim to end extreme poverty and hunger, fight inequality and injustice, and tackle climate change by 2030.

This IOHK-sponsored challenge aims to promote projects based on the digitization of finance that increase the efficacy and transparency of funding for the UN’s Decade of Action. In the run-up to the 2030 deadline for achieving the global sustainability goals, the UN is marking 75 years since its establishment. Because the transnational organization works on global collective action problems, it has engaged with blockchain technology as a solution.

Crowdsourcing the future

Participants in the program can put forward ideas focused on any of the 17 goals. To encourage participation, IOHK is sponsoring a prize fund of ada worth $10,000 as well as ongoing support to bring the projects to fruition. Proposals will be judged by a panel of IOHK and UN employees. They will determine the winners based on an idea’s technical prowess, scalability and social impact, as well as its financial and volunteer support. The winning ideas will be able to seek the advice of experts from both the UN and IOHK to ensure that they are implemented in the most impactful way.

To qualify for the scheme, entries must be open source and created for use on the Cardano blockchain. Example code should be written in Marlowe, a domain-specific language developed for financial contracts on Cardano. Entries do not need to be fully coded submissions. Instead, they can be ideas that inspire anyone to get involved with blockchain technology and sustainable development. The proposal submission period opens on Saturday, October 10. Participants must be registered by then in order to submit. Entries must be finalized by October 18 at 11:59 MDT. Make sure to check the official rules to learn more.

Winners will be announced on October 24, United Nations Day, which marks the anniversary of the charter of the organization. We encourage everyone with an interest in using Cardano to achieve sustainability goals to get involved. Make your voice heard to help the UN’s Decade of Action now. If you are interested more generally in developing Cardano, join Project Catalyst on Ideascale.

Being lazy without getting bloated

Haskell nothunks library goes a long way towards making memory leaks a thing of the past

24 September 2020 Edsko de Vries 25 mins read


In our Developer Deep Dive series of occasional technical blogs, we invite IOHK’s engineers to discuss their latest work and insights.

Haskell is a lazy language. The importance of laziness has been widely discussed elsewhere: Why Functional Programming Matters is one of the classic papers on the topic, and A History of Haskell: Being Lazy with Class discusses it at length as well. For the purposes of this blog we will take it for granted that laziness is something we want. But laziness comes at a cost, and one of the disadvantages is that laziness can lead to memory leaks that are sometimes difficult to find. In this post we introduce a new library called nothunks aimed at discovering a large class of such leaks early, and helping to debug them. This library was developed for our work on the Cardano blockchain, but we believe it will be widely applicable in other projects too.

A motivating example

Consider the tiny application below, which processes incoming characters and reports how many characters there are in total, in addition to some per-character statistics:

import Data.List (foldl')
import qualified Data.Map.Strict as Map

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show)

type Stats = Int

update :: AppState -> Char -> AppState
update st c = st {
      total = total st + 1
    , indiv = Map.alter (Just . aux) c (indiv st)
    }
  where
    aux :: Maybe Stats -> Stats
    aux Nothing  = 1
    aux (Just n) = n + 1

initAppState :: AppState
initAppState = AppState {
      total = 0
    , indiv = Map.empty
    }

main :: IO ()
main = interact $ show . foldl' update initAppState

In this version of the code, the per-character statistics are simply how often we have seen each character. If we feed this code ‘aabbb’, it will tell us that it saw 5 characters, 2 of which were the letter ‘a’ and 3 of which were ‘b’:

# echo -n aabbb | cabal run example1
AppState {
    total = 5
  , indiv = fromList [('a',2),('b',3)]
  }

Moreover, if we feed the application a ton of data and construct a memory profile,

dd if=/dev/zero bs=1M count=10 | cabal run --enable-profiling example1 -- +RTS -hy

we see from Figure 1 that the application runs in constant space.

Figure 1. Memory profile for the first example

Figure 1. Memory profile for the first example

So far so good. But now suppose we make an innocuous-looking change. Suppose, in addition to reporting how often every character occurs, we also want to know the offset of the last time that the character occurs in the file:

type Stats = (Int, Int)

update :: AppState -> Char -> AppState
update st c = -- .. as before
  where
    aux :: Maybe Stats -> Stats
    aux Nothing       = (1     , total st)
    aux (Just (n, _)) = (n + 1 , total st)

The application works as expected:

# echo -n aabbb | cabal run example2
AppState {
    total = 5
  , indiv = fromList [('a',(2,1)),('b',(3,4))]
  }

and so the change is accepted in GitHub's PR code review and gets merged. However, although the code still works, it is now a lot slower.

# time (dd if=/dev/zero bs=1M count=100 | cabal run example1)
real    0m2,312s

# time (dd if=/dev/zero bs=1M count=100 | cabal run example2)
real    0m15,692s

We have a slowdown of almost an order of magnitude, although we are barely doing more work. Clearly, something has gone wrong, and indeed, we have introduced a memory leak (Figure 2).

Figure 2. Memory profile for example 2

Figure 2. Memory profile for example 2

Unfortunately, tracing a profile like this to the actual problem in the code can be very difficult indeed. What’s worse, although our change introduced a regression, the application still worked fine and so the test suite probably wouldn’t have failed. Such memory leaks tend to be discovered only when they get so bad in production that things start to break (for example, servers running out of memory), at which point you have an emergency on your hands.

In the remainder of this post we will describe how nothunks can help both with spotting such problems much earlier, and debugging them.

Instrumenting the code

Let’s first see what usage of nothunks looks like in our example. We modify our code and derive a new class instance for our AppState:

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show, Generic, NoThunks)

The NoThunks class is defined in the nothunks library, as we will see in detail later. Additionally, we will replace foldl' with a new function:

repeatedly :: forall a b. (NoThunks b, HasCallStack)
           => (b -> a -> b) -> (b -> [a] -> b)
repeatedly f = ..

We will see how to define repeatedly later, but, for now, think of it as 'foldl' with some magic sprinkled on top’. If we run the code again, the application will throw an exception almost immediately:

# dd if=/dev/zero bs=1M count=100 | cabal run example3
example3: Unexpected thunk with context
CallStack (from HasCallStack):
  error, called at shared/Util.hs:22:38 in Util
  repeatedly, called at app3/Main.hs:38:26 in main:Main

The essence of the nothunks library is that we can check if a particular value contains any thunks we weren’t expecting, and this is what repeatedly is using to make sure we’re not inadvertently introducing any thunks in the AppState; it’s this check that is failing and causing the exception. We get a HasCallStack backtrace telling us where we introduced that thunk, and – even more importantly – the exception gives us a helpful clue about where the thunk was:

["Int","(,)","Map","AppState"]

This context tells us that we have an AppState containing a Map containing tuples, all of which were in weak head normal form (not thunks), but the tuple contained an Int which was not in weak head normal form: a thunk.

From a context like this it is obvious what went wrong: although we are using a strict map, we have instantiated the map at a lazy pair type, and so although the map is forcing the pairs, it’s not forcing the elements of those pairs. Moreover, we get an exception the moment we introduce the thunk, which means that we can catch such regressions in our test suite. We can even construct minimal counter-examples that result in thunks, as we will see later.
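One way to plug this particular leak is to force both components before constructing the pair. This is a sketch of ours, not from the original post, reusing AppState and Stats as defined in the example above:

```haskell
{-# LANGUAGE BangPatterns #-}

import qualified Data.Map.Strict as Map

-- AppState and Stats as defined in the example above. The bang
-- patterns force both components to WHNF before the pair is built,
-- so the strict map's forcing of the pair is enough to keep the
-- whole state thunk-free.
update :: AppState -> Char -> AppState
update st c = st {
      total = total st + 1
    , indiv = Map.alter (Just . aux) c (indiv st)
    }
  where
    aux :: Maybe Stats -> Stats
    aux Nothing       = let !o = total st in (1, o)
    aux (Just (n, _)) = let !n' = n + 1
                            !o  = total st
                        in (n', o)
```

Following the post’s own advice that leaks are best avoided with appropriate data types, an alternative would be to make Stats a dedicated pair type with strict fields.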

Using nothunks

Before we look at how the library works, let’s first see how it’s used. In the previous section we were using a magical function repeatedly, but didn’t see how we could define it. Let’s now look at this function:

repeatedly :: forall a b. (NoThunks b, HasCallStack)
           => (b -> a -> b) -> (b -> [a] -> b)
repeatedly f = go
  where
    go :: b -> [a] -> b
    go !b []     = b
    go !b (a:as) =
        let !b' = f b a
        in case unsafeNoThunks b' of
             Nothing    -> go b' as
             Just thunk -> error . concat $ [
                 "Unexpected thunk with context "
               , show (thunkContext thunk)
               ]

The only difference between repeatedly and foldl' is the call to unsafeNoThunks, which is the function that checks if a given value contains any unexpected thunks. The function is marked as ‘unsafe’ because whether or not a value is a thunk is not normally observable in Haskell; making it observable breaks equational reasoning, and so this should only be used for debugging or in assertions. Each time repeatedly applies the provided function f to update the accumulator, it verifies that the resulting value doesn’t contain any unexpected thunks; if it does, it errors out (in real code such a check would only be enabled in test suites and not in production).

One point worth emphasizing is that repeatedly reduces the value to weak head normal form (WHNF) before calling unsafeNoThunks. This is, of course, what makes a strict fold-left strict, and so repeatedly must do this to be a good substitute for foldl'. However, it is important to realize that if repeatedly did not do that, the call to unsafeNoThunks would trivially and immediately report a thunk; after all, we have just created the f b a thunk! Generally speaking, it is not useful to call unsafeNoThunks (or its IO cousin noThunks) on values that aren’t already in WHNF.
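A tiny illustration of the difference (our own example, independent of the library): forcing a value to WHNF evaluates only its outermost constructor, so a pair in WHNF can still hide thunks in its components:

```haskell
import Control.Exception (evaluate)

main :: IO ()
main = do
  let p = (undefined :: Int, 2 :: Int)
  -- evaluate forces p to WHNF only: the (,) constructor is exposed,
  -- but neither component is touched, so the undefined is harmless...
  _ <- evaluate p
  -- ...until something actually demands the first component
  -- (fst p would crash). This prints 2.
  print (snd p)
```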

In general, long-lived application state should never contain any unexpected thunks, and so we can apply the same kind of pattern in other scenarios. For example, suppose we have a server that is a thin IO layer on top of a mostly pure code base, storing the application state in an IORef. Here, too, we might want to make sure that that IORef never points to a value containing unexpected thunks:

newtype StrictIORef a = StrictIORef (IORef a)

readIORef :: StrictIORef a -> IO a
readIORef (StrictIORef v) = Lazy.readIORef v

writeIORef :: (NoThunks a, HasCallStack)
           => StrictIORef a -> a -> IO ()
writeIORef (StrictIORef v) !x = do
    check x
    Lazy.writeIORef v x

check :: (NoThunks a, HasCallStack) => a -> IO ()
check x = do
    mThunk <- noThunks [] x
    case mThunk of
      Nothing -> return ()
      Just thunk ->
        throw $ ThunkException
                  (thunkContext thunk)

Since check already lives in IO, it can use noThunks directly, instead of using the unsafe pure wrapper; but otherwise this code follows a very similar pattern: the moment we might introduce a thunk, we instead throw an exception. One could imagine doing a very similar thing for, say, StateT, checking for thunks in put:

newtype StrictStateT s m a = StrictStateT (StateT s m a)
  deriving (Functor, Applicative, Monad)

instance (Monad m, NoThunks s)
      => MonadState s (StrictStateT s m) where
  get    = StrictStateT $ get
  put !s = StrictStateT $
      case unsafeNoThunks s of
        Nothing -> put s
        Just thunk -> error . concat $ [
            "Unexpected thunk with context "
          , show (thunkContext thunk)
          ]

Minimal counter-examples

In some applications, there can be complicated interactions between the input to the program and the thunks it may or may not create. We will study this through a somewhat convoluted but, hopefully, easy-to-understand example. Suppose we have a server that is processing two types of events, A and B:

data Event = A | B
  deriving (Show)

type State = (Int, Int)

initState :: State
initState = (0, 0)

update :: Event -> State -> State
update A (a, b)    = let !a' = a + 1 in (a', b)
update B (a, b)
  | a < 1 || b < 1 = let !b' = b + 1 in (a, b')
  | otherwise      = let  b' = b + 2 in (a, b')

The server’s internal state consists of two counters, a and b. Each time we see an A event, we just increment the first counter. When we see a B event, however, we increment b by 1 only if a and b haven’t reached 1 yet, and by 2 otherwise. Unfortunately, the code contains a bug: in one of these cases, part of the server’s state is not forced and we introduce a thunk. (Disclaimer: the code snippets in this blog post are not intended to be good examples of coding, but to make it obvious where memory leaks are introduced. Typically, memory leaks should be avoided by using appropriate data types, not by modifying code.)

A minimal counter-example that will demonstrate the bug would therefore involve two events A and B, in any order, followed by another B event. Since we get an exception the moment we introduce a thunk, we can then use a framework such as quickcheck-state-machine to find bugs like this and construct such minimal counter-examples.

Here’s how we might set up our test. Explaining how quickcheck-state-machine (QSM) works is well outside the scope of this blog post; if you’re interested, a good starting point might be An in-depth look at quickcheck-state-machine. For this post, it is enough to know that in QSM we are comparing a real implementation against some kind of model, firing off ‘commands’ against both, and then checking that the responses match. Here, both the server and the model will use the update function, but the ‘real’ implementation will use the StrictIORef type we introduced above, and the mock implementation will just use the pure code, with no thunks check. Thus, when we compare the real implementation against the model, the responses will diverge whenever the real implementation throws an exception (caused by a thunk):

data T

type instance MockState   T = State
type instance RealMonad   T = IO
type instance RealHandles T = '[]

data instance Cmd T f hs where
  Cmd :: Event -> Cmd T f '[]

data instance Resp T f hs where
  -- We record any exceptions that occurred
  Resp :: Maybe String -> Resp T f '[]

deriving instance Eq   (Resp T f hs)
deriving instance Show (Resp T f hs)
deriving instance Show (Cmd  T f hs)

instance NTraversable (Resp T) where
  nctraverse _ _ (Resp ok) = pure (Resp ok)

instance NTraversable (Cmd T) where
  nctraverse _ _ (Cmd e) = pure (Cmd e)

sm :: StrictIORef State -> StateMachineTest T
sm state = StateMachineTest {
      runMock    = \(Cmd e) mock ->
        (Resp Nothing, update e mock)
    , runReal    = \(Cmd e) -> do
        real <- readIORef state
        ex   <- try $ writeIORef state (update e real)
        return $ Resp (checkOK ex)
    , initMock   = initState
    , newHandles = \_ -> Nil
    , generator  = \_ -> Just $
        elements [At (Cmd A), At (Cmd B)]
    , shrinker   = \_ _ -> []
    , cleanup    = \_ -> writeIORef state initState
    }
  where
    checkOK :: Either SomeException () -> Maybe String
    checkOK (Left err) = Just (show err)
    checkOK (Right ()) = Nothing

(This uses the new Lockstep machinery in QSM that we introduced in the Munihac 2019 hackathon.)

If we run this test, we get the minimal counter-example we expect, along with the HasCallStack backtrace and the context telling us precisely that we have a thunk inside a lazy pair:

*** Failed! Falsified (after 6 tests and 2 shrinks):
  { unCommands =
      [ Command At { unAt = Cmd B } At { unAt = Resp Nothing } []
      , Command At { unAt = Cmd A } At { unAt = Resp Nothing } []
      , Command At { unAt = Cmd B } At { unAt = Resp Nothing } []
      ]
  }


Resp (Just "Thunk exception in context [Int,(,)]
    called at shared/StrictIORef.hs:26:5 in StrictIORef
    writeIORef, called at app5/Main.hs:71:37 in Main")
:/= Resp Nothing

The combination of a minimal counter-example, a clear context, and the backtrace, makes finding most such memory leaks almost trivial.
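Once found, the fix is a single bang pattern (our correction, restating the types from above): the otherwise branch was the one place where b' was left unforced:

```haskell
{-# LANGUAGE BangPatterns #-}

data Event = A | B
  deriving (Show)

type State = (Int, Int)

-- Corrected update: b' is now forced in both branches, so no thunk
-- can end up inside the server state.
update :: Event -> State -> State
update A (a, b)    = let !a' = a + 1 in (a', b)
update B (a, b)
  | a < 1 || b < 1 = let !b' = b + 1 in (a, b')
  | otherwise      = let !b' = b + 2 in (a, b')
```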

Under the hood

The core of the nothunks library is the NoThunks class:

-- | Check a value for unexpected thunks
class NoThunks a where
  noThunks   :: [String] -> a -> IO (Maybe ThunkInfo)
  wNoThunks  :: [String] -> a -> IO (Maybe ThunkInfo)
  showTypeOf :: Proxy a -> String

data ThunkInfo = ThunkInfo {
      thunkContext :: Context
    }
  deriving (Show)

type Context = [String]

All of the NoThunks class methods have defaults, so instances can be, and very often are, entirely empty, or – equivalently – derived using DeriveAnyClass.
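For example (a sketch with a hypothetical type T; the module name NoThunks.Class is as in the library):

```haskell
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}

import GHC.Generics (Generic)
import NoThunks.Class (NoThunks)

-- Hypothetical record with strict fields.
data T = T !Int !Bool
  deriving (Show, Generic)

-- An entirely empty instance: every method falls back to its default.
instance NoThunks T

-- Equivalently, with DeriveAnyClass the instance can go straight
-- into the deriving clause:
--
--   data T = T !Int !Bool
--     deriving (Show, Generic, NoThunks)
```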

The noThunks function is the main entry point for application code, and we have already seen it in use. Instances of NoThunks, however, almost never need to redefine noThunks and can use the default implementation, which we will take a look at shortly. Conversely, wNoThunks is almost never useful for application code but it’s where most of the datatype-specific logic lives, and is used by the default implementation of noThunks; we will see a number of examples of it below. Finally, showTypeOf is used to construct a string representation of a type when constructing the thunk contexts; it has a default in terms of Generic.

noThunks

Suppose we are checking if a pair contains any thunks. We should first check if the pair itself is a thunk, before we pattern match on it. After all, pattern matching on the pair would force it, and so if it had been a thunk, we wouldn’t be able to see this any more. Therefore, noThunks first checks if a value itself is a thunk, and if it isn’t, it calls wNoThunks; the w stands for WHNF: wNoThunks is allowed to assume (has as precondition) that its argument is not itself a thunk and so can be pattern-matched on.

noThunks :: [String] -> a -> IO (Maybe ThunkInfo)
noThunks ctxt x = do
    isThunk <- checkIsThunk x
    if isThunk
      then return $ Just ThunkInfo { thunkContext = ctxt' }
      else wNoThunks ctxt' x
  where
    ctxt' :: [String]
    ctxt' = showTypeOf (Proxy @a) : ctxt

Note that when wNoThunks is called, the (string representation of) type a has already been added to the context.

wNoThunks

Most of the datatype-specific work happens in wNoThunks; after all, we can now pattern match. Let’s start with a simple example, a manual instance for a type of strict pairs:

data StrictPair a b = StrictPair !a !b

instance (NoThunks a, NoThunks b)
      => NoThunks (StrictPair a b) where
  showTypeOf _ = "StrictPair"
  wNoThunks ctxt (StrictPair x y) = allNoThunks [
        noThunks ctxt x
      , noThunks ctxt y
      ]

Because we have verified that the pair itself is in WHNF, we can just extract both components, and recursively call noThunks on both of them. Function allNoThunks is a helper defined in the library that runs a bunch of thunk checks, stopping at the first one that reports a thunk.

Occasionally we do want to allow for selected thunks. For example, suppose we have a set of integers with a cached total field, but we only want to compute that total if it’s actually used:

data IntSet = IntSet {
      toSet :: !(Set Int)

      -- | Total
      -- Intentionally /not/ strict:
      -- Computed when needed (and then cached)
    , total :: Int
    }
  deriving (Generic)

Since total must be allowed to be a thunk, we skip it in wNoThunks:

instance NoThunks IntSet where
  wNoThunks ctxt (IntSet xs _total) = noThunks ctxt xs

Such constructions should probably only be used sparingly; if the various operations on the set are not carefully defined, the set might hold on to all kinds of data through that total thunk. Code like that needs careful thought and careful review.

Generic instance

If no implementation is given for wNoThunks, it uses a default based on GHC generics. This means that for types that implement Generic, deriving a NoThunks instance is often as easy as in the AppState example above, simply saying:

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show, Generic, NoThunks)

Many instances in the library itself are also defined using the generic instance; for example, the instance for (default, lazy) pairs is just:

instance (NoThunks a, NoThunks b) => NoThunks (a, b)

Deriving-via wrappers

Sometimes, we don’t want the default behavior implemented by the generic instance, but defining an instance by hand can be cumbersome. The library therefore provides a few newtype wrappers that can be used to conveniently derive custom instances. We will discuss three such wrappers here; the library comes with a few more.

Only check for WHNF

If all you want to do is check if a value is in weak head normal form (i.e., check that it is not a thunk itself, although it could contain thunks), you can use OnlyCheckWhnf. For example, the library defines the instance for Bool as:

deriving via OnlyCheckWhnf Bool
         instance NoThunks Bool

For Bool, this is sufficient: when a boolean is in weak head normal form, it won’t contain any thunks. The library also uses this for functions:

deriving via OnlyCheckWhnfNamed "->" (a -> b)
         instance NoThunks (a -> b)

(Here, the Named version allows you to explicitly define the string representation of the type to be included in the thunk contexts.) Using OnlyCheckWhnf for functions means that any values in the function closure will not be checked for thunks. This is intentional and a subtle design decision; we will come back to this in the section on permissible thunks below.

Skipping some fields

For types such as IntSet where most fields should be checked for thunks, but some fields should be skipped, we can use AllowThunksIn:

deriving via AllowThunksIn '["total"] IntSet
         instance NoThunks IntSet

This can be handy for large record types, where giving the instance by hand is cumbersome and, moreover, can easily get out of sync when changes to the type (for example, a new field) are not reflected in the definition of wNoThunks.

Inspecting the heap directly

Instead of going through the class system and the NoThunks instances, we can also inspect the GHC heap directly. The library makes this available through the InspectHeap newtype, which has an instance:

instance Typeable a => NoThunks (InspectHeap a) where
  -- ..

Note that this does not depend on a NoThunks instance for a. We can use this like any other deriving-via wrapper, for example:

deriving via InspectHeap TimeOfDay
         instance NoThunks TimeOfDay

The advantage of such an instance is that we do not require instances for any nested types; for example, although TimeOfDay has a field of type Pico, we don’t need a NoThunks instance for it.

The disadvantage is that we lose all compositionality. If there are any nested types for which we want to allow thunks, we have no way of overriding the behaviour of the no-thunks check for those types. Since we are inspecting the heap directly, and the runtime system does not record any type information, any NoThunks instances for those types are ignored, and the check will report every thunk it finds. Moreover, when we do find such a thunk, we cannot report a useful context, because – again – we have no type information. If noThunks finds a thunk deeply nested inside some T (whose NoThunks instance was derived using InspectHeap), it will merely report "..." : "T" as the context (plus perhaps any context leading to T itself).

Permissible thunks

Some data types inherently depend on the presence of thunks. For example, the Seq type defined in Data.Sequence internally uses a finger tree. Finger trees are a specialized data type introduced by Ralf Hinze and Ross Paterson; for our purposes, all you need to know is that finger trees make essential use of thunks in their spines to achieve their asymptotic complexity bounds. This means that the NoThunks instance for Seq must allow for thunks in the spine of the data type, although it should still verify that there are no thunks in any of the elements in the sequence. This is easy enough to do; the instance in the library is:

instance NoThunks a => NoThunks (Seq a) where
  showTypeOf _   = "Seq"
  wNoThunks ctxt = noThunksInValues ctxt . toList

Here, noThunksInValues is a helper function that checks a list of values for thunks, without checking the list itself.

However, the existence of types such as Seq means that the non-compositionality of InspectHeap can be a big problem. It is also the reason that for functions we merely check if the function is in weak head normal form. Although the function could have thunks in its closure, we don’t know what their types are. We could check the function closure for thunks (using InspectHeap), but if we did, and that closure contained, say, a Seq among its values, we might incorrectly report an unexpected thunk. Because it is more problematic if the test reports a bug when there is none than when an actual bug is not reported, the library opts to check only functions for WHNF. If in your application you store functions, and it is important that these functions are checked for thunks, then you can define a custom newtype around a -> b with a NoThunks instance defined using InspectHeap (but only if you are sure that your functions don’t refer to types that must be allowed to have thunks).
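Such a wrapper might look like this (a sketch; CheckedFn is our name, not part of the library, and it assumes the stored functions never close over thunk-dependent types such as Seq):

```haskell
{-# LANGUAGE DerivingVia #-}
{-# LANGUAGE StandaloneDeriving #-}

import Data.Typeable (Typeable)
import NoThunks.Class (InspectHeap (..), NoThunks)

-- A function wrapper whose closure *is* checked for thunks, by
-- inspecting the heap directly rather than going via instances.
newtype CheckedFn a b = CheckedFn (a -> b)

deriving via InspectHeap (CheckedFn a b)
  instance (Typeable a, Typeable b) => NoThunks (CheckedFn a b)
```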

Comparison with the heap/stack limit size method

In 2016, Neil Mitchell gave a very nice talk at HaskellX, where he presented a method for finding memory leaks (he has also written a blog post on the topic). The essence of the method is to run your test suite with much reduced stack and heap limits, so that if there is a memory leak in your code, you will notice it before it hits production. He then advocates the use of the -xc runtime flag to get a stack trace when such a ‘stack limit exhausted’ exception is thrown.

The technique advocated in this post has a number of advantages. We get an exception the moment a thunk is created, so the stack trace we get is often much more useful. Together with the context reported by noThunks, finding the problem is usually trivial. Interpreting the stack reported by -xc can be more difficult, because this exception is thrown when the limit is exhausted, which may or may not be related to the code that introduced the leak in the first place. Moreover, since the problem only becomes known when the limit is exhausted, minimal counter-examples are out of the question. It can also be difficult to pick a suitable value for the limit; how much memory does the test suite actually need, and what would constitute a leak? Finally, -xc requires your program to be compiled with profiling enabled, which means you’re debugging something different to what you’d run in production, which is occasionally problematic.

Having said all that, the nothunks method does not replace the heap/stack limit method, but complements it. The nothunks approach is primarily useful for finding space leaks in pieces of data where it’s clear that we don’t want any thunk build-up, typically long-lived application state. It is less useful for finding more ‘local’ space leaks, such as a function accumulator not being updated strictly. For finding such leaks, setting stack/heap limits is still a useful technique.

Conclusions

Long-lived application data should, typically, not have any thunk build-up. The nothunks library can verify this through the noThunks and unsafeNoThunks function calls, which check if the supplied argument contains any unexpected thunks. These checks can then be used in assertions to check that no thunks are created. This means that if we do introduce a thunk by mistake, we get an immediate test failure, along with a callstack to the place where the thunk was created as well as a context providing a helpful hint on where the thunk is. Together with a testing framework, this makes memory leaks much easier to debug and avoid. Indeed, they have mostly been a thing of the past in our work on Cardano since we started using this approach.

Project Catalyst: introducing our first public fund for Cardano community innovation

An exciting experiment to start building the future of Cardano

16 September 2020 Dor Garbash 4 mins read


Today, we are announcing the launch of Project Catalyst’s first public fund, an important first step into the world of on-chain governance, treasury, and community innovation for Cardano.

The public fund launch follows five months of intense activity, across two previous trial ‘funds’. ‘Fund 0’ was the very first experiment, using a focus group made up of IOG team members. ‘Fund 1’ was the first time we introduced the idea to the Cardano community, recruiting a group of some 50 volunteers to help us develop the platform and processes. While this voting cycle did not offer ‘real’ funding, it served as a hugely valuable way to give our team and the Cardano community members a chance to test drive and refine the emerging process.

We still have a way to go. But with the community's support, we want to maintain the pace of progress we have already set. If Fund 0 was the technical runthrough, Fund 1 was the dress rehearsal. Announced today, Fund 2 is the opening night where the star acts of the community get the chance to compete for the funding to bring their project center stage.

Funding the future

We have learned a lot since we kicked off the private phase of the program in August. Support from our pioneer group of 50 community participants helped us to identify areas for improvement so we could develop and refine the process before opening things up more widely. We learned that providing clear documentation and guidelines helps the community engage more deeply and focus on the ideas. We also learned we can provide alternative avenues for the community to discuss “meta” proposals. This enhances the Catalyst and Voltaire processes while focusing attention on writing impactful proposals. Furthermore, we realized that it was important for IOHK to support individuals writing proposals to ensure their ideas were fairly represented.

Cardano will thrive by unlocking the creative potential of our global community. Our voting protocol will only be as good as the ideas which feed into it. To that end, we are developing a guide to help anyone develop the best possible proposal for Fund 2 and beyond.

The first public fund we’re announcing today contains up to $250k-worth of ada, which the community can access. Anyone can bring their idea and create a proposal. Through a public vote, ‘winning’ proposals will begin a development process.

Initially, we’re keeping the focus tight, asking the community to address a challenge statement: “How can we encourage developers and entrepreneurs to build Dapps and businesses on top of Cardano in the next 6 months?” Funding proposals (or ‘FPs’) can address this with a wide variety of ideas – from marketing initiatives and infrastructure development, to business planning and content creation.

The first stage will be to ‘explore the challenge’, asking members of the community to provide their perspectives. Next, we’ll encourage everyone to put their best ideas forward on the innovation platform to collaborate and discuss via a dedicated Telegram chat channel.

A public vote

After the initial ideation, collaboration and proposal stages, we’ll be putting things to a vote. Proposals can be reviewed either on the innovation platform or on the new mobile voting application, currently in development. When it comes time to vote, all participants will need to register to vote through the voting app. The ‘right’ to vote will be linked to each participant’s ada holdings and will earn them additional rewards for voting. Participating in this first funding round will not prevent ada holders from delegating their ada and earning rewards as normal. Voting effectively functions like a ‘transaction’, allowing all participants to cast their vote to indicate ‘yes,’ or ‘no.’ We’ll share further information on the app and the voting process in a future blog post.

Collaboration and innovation

Voltaire will be a crucial building block in the Cardano ecosystem, because it allows every ada holder to be involved in making decisions on the future development of the platform and contribute to the growth of the ecosystem. Project Catalyst is the important first component in delivering that capability. The early stages of this experiment have already demonstrated the passion and commitment of the Cardano community to develop this further. With the introduction of an on-chain voting and treasury system, network participants will be able to use their stake and voting rights to steer Cardano towards tackling shared goals, in a democratic and self-sustaining way.

To find out more, you can watch our announcement Crowdcast with Charles Hoskinson and Project Catalyst’s product manager, Dor Garbash.

Project Catalyst and Voltaire bring power to the people

Establishing a long-term future for Cardano growth has begun with a treasury and democratic voting in the Catalyst project

10 September 2020 Bingsheng Zhang 7 mins read

Project Catalyst and Voltaire bring power to the people

Designing a groundbreaking proof-of-stake blockchain means it is vital to ensure that the system is self-sustainable. This will allow it to drive growth and maturity in a truly decentralized and organic way. Voltaire is IOHK’s way of establishing this capability, allowing the community to maintain the Cardano blockchain while continuing to develop it by proposing and implementing system improvements. This puts the power to make decisions in the hands of ada holders.

Rigorous research lies at the heart of building a solid blockchain. July’s Shelley summit included a presentation on the importance of funding for the growth of Cardano. This was based on research between Lancaster University and IOHK into the notion of a treasury system and an effective, democratic approach to funding Cardano’s long-term development. IOHK has now applied treasury mechanism capabilities in Project Catalyst, which combines research, social experiments and community consent to establish an open, democratic culture within the Cardano community.

A democratic approach

With the rapid growth of blockchain technology, the world is witnessing the emergence of platforms across a variety of industries. Technological growth and maturity are essential for long-term blockchain sustainability and development. That is why someone has to support and fund growth and system enhancements. A democratic approach is an integral part of the blockchain ecosystem because it allows sustainability decisions to be made collaboratively, without relying on a central governing entity. Thus, the governing and decision-making process must be collective. This will allow users to understand how improvements are made, who makes decisions, and ultimately where the funding comes from to make these choices.

Long-term sustainability

There are several ways to raise capital for development purposes. Donations, venture capital funding, and initial coin offerings (ICOs) are the most common. However, although such models may work for raising initial capital, they rarely ensure a long-term funding source or predict the amount of capital needed for development and maintenance. In addition, these models suffer from centralized control, making it difficult to find a consensus meeting the needs and goals of everyone.

To establish a long-term funding source for blockchain development, some cryptocurrency projects apply taxation, taking a percentage from fees or rewards, and accumulating them in a separate pool – a treasury. Treasury funds can then be used for system development and maintenance purposes. In addition, treasury reserves may appreciate as the value of the cryptocurrency grows, providing another potential source of funds.

However, funding systems are often at risk of centralization when making decisions on guiding development. In these systems, only certain individuals in the organization or company are empowered to make decisions on how to use available funds and for which purposes. Considering that the decentralized architecture of blockchain makes it inappropriate to have centralized control over funding, disagreement can arise among organization members and lead to complex disputes.

Treasury systems and Cardano

A number of treasury systems have arisen to address these problems. These systems may consist of iterative treasury periods during which project funding proposals are submitted, discussed, and voted on. However, common drawbacks include poor voter privacy or ballot submission security. In addition, the soundness of funding decisions can be compromised if master nodes are subject to coercion, or a lack of expert involvement might encourage irrational behavior.

As a third-generation cryptocurrency platform, Cardano was created to solve the difficulties encountered by previous platforms.

Cardano aims to bring democracy to the process, giving power to everyone and so ensuring that decisions are fair. For this, it is crucial to put in place transparent voting and funding processes. This is where Voltaire comes in.

The paper on treasury systems for cryptocurrencies introduces a community-controlled, decentralized, collaborative decision-making mechanism for sustainable funding of blockchain development and maintenance. This approach to collaborative intelligence relies on ‘liquid democracy’ – a hybrid of direct and representative democracy that provides the benefits of both systems.

This approach enables the treasury system to take advantage of expert knowledge in a voting process, as well as ensuring that all ada holders are granted an opportunity to vote. Thus, for each project, a voter can either vote directly or delegate their voting power to a member of the community who is an expert on the topic.

To ensure sustainability, the treasury system is controlled by the community and is refilled constantly from potential sources such as:

  • some newly-minted coins being held back as funding
  • a percentage of stake pool rewards and transaction fees
  • additional donations or charity

Because funds are being accumulated continually, it will be possible to fund projects and pay for improvement proposals.
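The refill sources above can be sketched as a simple per-epoch accumulation. This is an illustrative model only: the function name, the cut rates, and the figures are hypothetical, not Cardano's actual protocol parameters.

```python
# Minimal sketch of a treasury accumulating funds each epoch from the three
# sources listed above. All names and figures are hypothetical.

def treasury_inflow(minted_coins, pool_rewards, tx_fees, donations,
                    mint_cut=0.25, reward_cut=0.25):
    """Sum the per-epoch contributions from each funding source."""
    return (minted_coins * mint_cut                   # share of newly-minted coins
            + (pool_rewards + tx_fees) * reward_cut   # share of rewards and fees
            + donations)                              # voluntary contributions

treasury = 0.0
for epoch in range(5):
    treasury += treasury_inflow(minted_coins=1_000_000, pool_rewards=800_000,
                                tx_fees=50_000, donations=10_000)

print(f"Treasury after 5 epochs: {treasury:,.0f} ada")
```

Because each source contributes every epoch, the pool of funds grows continuously rather than depending on one-off fundraising.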

So, the funding process can consist of ‘treasury periods’, with each being divided into the following phases:

  • pre-voting
  • voting
  • post-voting

During each period, project proposals may be submitted, discussed by experts and voters, and finally voted for to fund the most essential projects. Even though anyone can submit a proposal, only certain proposals can be supported depending on their importance and desirability for network development.

Voting and decision making

To understand which project should be funded first, let’s discuss the process of decision-making.

Ada holders who participate in treasury voting include scientists and developers, management team members, investors, and the general public. Each of these may have different imperatives for the growth of the system, and that is why there has to be a way to make these choices and desires work together.

For this, the voting power is proportional to the amount of ada someone owns; the more ada, the more influence in making decisions. As part of the liquid democracy approach, as well as direct yes/no voting, an individual may delegate their voting power to an expert they trust. In this case, the expert will be able to vote directly on the proposal they regard as the most important. After the voting, project proposals may be scored based on the number of yes/no votes and be shortlisted; the weakest project proposals will be discarded. Then, shortlisted proposals can be ranked according to their score, and the top-ranked proposals will be funded in turn until the treasury fund is exhausted. The strategy of dividing the decision-making process into stages allows reaching consensus on the priority of improvements.
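The tallying process described above can be sketched in a few lines: each holder either votes directly or delegates to a trusted expert, voting power is stake-weighted, and shortlisted proposals are funded in rank order until the treasury runs out. This is a hedged illustration of the mechanism, not the actual protocol; all names and numbers are invented.

```python
# Liquid-democracy tallying sketch: stake-weighted yes/no votes with
# optional delegation, then shortlist, rank, and fund until funds run out.

def tally(votes, stakes, delegations):
    """Net score for one proposal: yes-weight minus no-weight."""
    score = 0.0
    for voter, stake in stakes.items():
        # A delegating voter inherits their expert's choice.
        choice = votes.get(delegations.get(voter, voter))
        if choice == "yes":
            score += stake
        elif choice == "no":
            score -= stake
    return score

def fund(proposals, treasury, threshold=0.0):
    """Shortlist proposals above threshold, rank by score, fund in turn."""
    shortlisted = sorted((p for p in proposals if p["score"] > threshold),
                         key=lambda p: p["score"], reverse=True)
    funded = []
    for p in shortlisted:
        if p["cost"] <= treasury:
            treasury -= p["cost"]
            funded.append(p["name"])
    return funded

stakes = {"alice": 100, "bob": 50, "carol": 30}
delegations = {"carol": "alice"}           # carol delegates to alice
votes = {"alice": "yes", "bob": "no"}      # carol casts no direct vote

score = tally(votes, stakes, delegations)  # 100 - 50 + 30 = 80
proposals = [{"name": "A", "score": score, "cost": 60},
             {"name": "B", "score": 40, "cost": 50}]
print(fund(proposals, treasury=100))       # ['A'] – B's cost exceeds what remains
```

Note how carol's stake counts towards alice's choice without carol voting directly: that is the liquid part of liquid democracy.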

To ensure voter privacy, the research team has invented an ‘honest verifier zero-knowledge proof for unit vector encryption with logarithmic size communication’. Zero-knowledge techniques are mathematical methods used to verify things without revealing any underlying data. In this case, the zero-knowledge proof means that someone can vote without revealing any information about themselves, other than that they are eligible to vote. This eliminates any possibility of voter coercion.

Treasury prototypes have been created at IOHK for benchmarking. Implementing the research as the basis of Voltaire will help to deliver reliable and secure means for treasury voting and decision-making. Project Catalyst is an experimental treasury system that combines proposal and voting procedures focusing on the establishment of a democratic culture within the Cardano community. Initially, Cardano’s treasury will be refilled from a percentage of stake pool rewards ensuring a sustainable treasury source. Other blockchains have treasury systems, but IOHK’s combines complete privacy through zero-knowledge proofs, liquid democracy from expert involvement and vote delegation, and participation for all, not just a governing entity. This should encourage participation, incentivization, and decentralization for making fair and transparent decisions.

It is also important to note that this treasury system mechanism can be implemented on a variety of blockchains, not just Cardano. It has already been proposed for implementation on Ethereum Classic. In the process, treasury systems can help everyone to understand how a network will develop.

After a successful closed user group trial that started this summer, Project Catalyst will very soon be opened up to its first public beta program. Although it is still early days for Cardano on-chain governance, we look forward to a bright future, with the community lighting the way. So, please follow the blog for updates on Voltaire and how Project Catalyst is paving the way to Cardano’s sustainability.

The decline and fall of centralization

This week marks the first step in the road to the full decentralization of Cardano, as stake pools begin to take responsibility for block production. Here’s what the journey will look like.

14 August 2020 Kevin Hammond 12 mins read

The decline and fall of centralization

Full decentralization lies at the heart of Cardano’s mission. While it is not the only goal that we're focused on, in many ways, it is a goal that will enable and accelerate almost every other. It is integral to where we want to go as a project.

It is also where the philosophical and technical grounding of the entire Cardano project meets its community, in very real and tangible ways. This is why we have done a lot of thinking on how to achieve decentralization effectively, safely, and with the health of the ecosystem front of mind.

Defining decentralization

Let’s start by explaining what we mean by decentralization. This is a word that is fraught with challenge, with several competing meanings prevalent in the blockchain community.

For us, decentralization is both a destination and a journey. Shelley represents the first steps toward a fully decentralized state; from the static, federated approach of Byron to a fully democratic environment where the community not only runs the network, but is empowered and encouraged to take decisions through an on-chain framework of governance and voting.

True decentralization lies at the confluence of three essential components, working together in unison.

  • Networking - where geographically distributed agents are linked together to provide a secure and robust blockchain platform.
  • Block production - where the work of building and maintaining the blockchain is distributed across the network to a collection of cooperating stake pools.
  • Governance - where decisions about the blockchain protocol and the evolution of Cardano are taken collectively by the community of Cardano stakeholders.

Only when all these factors exist within a single environment can true decentralization be said to have been achieved successfully.

Key parameters that affect decentralization

Let's talk about d, maybe.

The d-parameter performs a pivotal role in controlling the decentralization of block production. Decentralization is a spectrum, of course, rather than an absolute. In simple terms, d controls ‘how’ decentralized the network is. For example, at one extreme, d=1 means that block production is fully centralized. In this state, IOG’s core nodes produce all the blocks. This was how Byron operated.

Conversely, once d=0, and decentralized governance is in place and on chain, ‘full’ decentralization will have been achieved. At this point, stake pool operators produce all the blocks (block production is 100% decentralized), the community makes all the decisions on future direction and development (governance is decentralized), and a healthy ecosystem of geographically distributed stake pools are connected into a coherent and effective network (the network is decentralized). We will have reached our decentralization goal.

The journey that d will take from 1 to 0 is a nuanced one that requires a careful balance between the action of the protocol and the reaction of the network and its community. Rather than declining instantly, d will go through a period of ‘constant decay’ where it is gradually decremented until it reaches 0. At this point Cardano will be fully decentralized. This gradual process will allow us to collect performance data and to monitor the state of the network as it progresses towards this all-important point. A parameter-driven approach will help provide the community with transparency and a level of predictability. Meanwhile, we’ll be monitoring the results carefully; there will always be socio-economic and market factors to consider once ‘in the wild’.

How will the d parameter change over time?

The evolution from 1 to 0 is relatively simple:

When d=1, all blocks are produced by IOG core nodes, running in Ouroboros Byzantine Fault Tolerance (OBFT) mode. No blocks are produced by stake pool operators (running in Ouroboros Praos mode). All rewards go to treasury.

When d=0, the reverse becomes true: every block will be produced by stake pools (running in Praos mode), and none by the IOG core nodes. All rewards go to stake pools, once the fixed treasury rate is taken.

In between these extremes, a fraction of the blocks will be produced by the core nodes, and a fraction by the stake pools. The precise amounts are determined by d. So when d reaches 0.7, for example, 70% of the blocks will be produced by the core nodes and 30% will be produced by stake pools. When d subsequently reaches 0.2, 20% of the blocks will be produced by the core nodes, and 80% by the stake pools.
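The split described above is simple arithmetic: core nodes produce a d fraction of blocks and stake pools the rest. A quick sketch, assuming an illustrative per-epoch block count (not a protocol constant):

```python
# For a given d, split an epoch's blocks between IOG core nodes and
# community stake pools. The block count is illustrative only.

def block_split(d, blocks_per_epoch=21_600):
    core = round(d * blocks_per_epoch)    # fraction produced by core nodes
    pools = blocks_per_epoch - core       # remainder produced by stake pools
    return core, pools

print(block_split(0.7))  # (15120, 6480): 70% core, 30% pools
print(block_split(0.2))  # (4320, 17280): 20% core, 80% pools
```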

It is important to note that, regardless of the percentage of blocks produced by the stake pools, once d < 1, all the rewards will go to stake pools in line with the stake that they hold (after the fixed treasury percentage is taken), and none to the core nodes. This means that IOG has absolutely no incentive to keep the d parameter high. In fact, when d reaches zero, IOG will be able to save the costs of running the core nodes, which are not insubstantial.

Like many other ada holders, IO Global is currently running a number of stake pools on the mainnet. As the creator of the Cardano platform, IO Global naturally has a significant stake in its success from fiscal, fiduciary, and security aspects, and this success will be built on a large number of effective and decentralized pools. As a commercial entity, IO needs to generate revenue from its stake, while recognizing the part it needs to play within an ecosystem of stake pools, helping to grow and maintain the health of the network as we move towards full decentralization. In the medium term, we will follow a private/public/community delegation approach, similar to that we adopted on the ITN, spreading our stake across both IOG and community pools. In the short term, however, we are running IOG pools on the mainnet, establishing a number of our own pools that can take some of the load from our core nodes. Using our stake and technical expertise to secure and stabilise the network is an important element at first, but one that will become less important as the d parameter decreases. The road to decentralization will offer many opportunities for pools of all sizes to establish themselves and thrive along the way.

d-parameter diagram

The key milestones of the d journey

d<1.0 (Move away from centralization)

The first milestone happened on August 13 at the boundary of epoch 210 and 211 when the d parameter first dropped below 1.0. At this point, IOG's core nodes started to share block production with community stake pools. This marks the beginning of the road to full decentralization.

d=0.8 (Stake pools produce 20% of blocks)

At 0.8, more pools (double the number compared to d=0.9) will get the opportunity to create blocks and establish themselves. At this level, pools won’t suffer in the rankings as long as they create at least one of their allocated blocks, and they will still earn rewards. This way, we believe we can start growing the block-minting proportion of the network, at low network risk.

d<0.8 (Stake pool performance taken into account)

The next major milestone will happen when d drops below 0.8. Below that level, each pool's performance will be taken into account when determining the rewards that it receives. Above that level, however, the pool’s performance is ignored. The reason for this is to avoid unfairness to pools when they are only expected to produce a few blocks.

d<0.5 (Stake pools produce the majority of blocks)

When d drops below 0.5, stake pools will produce the majority of blocks. The network will have reached a tipping point, where decentralization is inevitable.

Before taking this dramatic step, we will ensure that two critical features are in place: peer-to-peer (P2P) pool discovery and protocol changes to enable community voting. These will enable us to make the final push to full and true decentralization. The recently announced Project Catalyst program was the first step in this concurrent journey to full on-chain governance.

d=0 (Achieve full decentralization)

As soon as the parameter reaches 0, the IOG core nodes will be permanently switched off.

IOG will continue to run its own stake pools that will produce blocks in line with the stake they attract, just like any other pools. But these will no longer have any special role in maintaining the Cardano network. It will also, of course, delegate a substantial amount of its stake to community pools. Simultaneously, the voting mechanism will be enabled, and it will no longer be possible to increase d and ‘re-centralize’ Cardano.

At this point in time, we will have irrevocably entered a fully decentralized Cardano network. Network + block production + on-chain governance = decentralization.

Rate of constant decay

The progressive decrement of d is known as constant decay. The gradual decrease will give us the chance to monitor the effects of each decrement on the network and to make adjustments where necessary. As the parameter decreases, more stake pools will also be able to make blocks, since the number of blocks that are made by the pools will increase, and less stake will then be required for each block that is made.

The key factors driving this decrease will be:

  • The resilience and reliability of the network as a whole.
  • The number of effective block-producing pools.
  • The amount of the total stake that has been delegated.

Here’s our current thinking on what implementation might look like:

Constant decay timeline

We will then likely pause before dropping the parameter below 0.5 to ensure that the two key conditions described above are met:

  • The implementation of the new Peer-to-Peer pool discovery mechanism has been released and is successfully in use;
  • We have successfully transitioned the first hard fork in the Shelley era, which will introduce the basis for community voting on protocol parameters, and other important protocol changes.

We will resume the countdown to d=0 at a similar rate, pausing again if necessary before finally transitioning to d=0 in March 2021.

Other factors that affect decentralization: Saturation threshold

A second parameter – k – is used to drive growth in the number of pools by encouraging delegators to spread their stake. By setting a cap on the amount of stake that earns rewards (the saturation threshold), new delegators are directed towards pools that have less stake. In ideal conditions, the network will stabilise towards the specific number of pools that have been targeted. In practice, we saw from the ITN that many more pools than this number were supported by the setting that we chose.

The k parameter was set to 150 at the Shelley hard fork. This setting was chosen to balance the need to support a significant number of stake pools from the start of the Shelley era against the possibility that only a small number of effective pools would be set up by the community. In due course, it will be increased to reflect the substantial number of pools that have emerged in the Cardano ecosystem since the hard fork. This will spread stake, and so block production, among more pools. The overall goal in choosing the setting of the parameter will be to maximise the number of sustainable pools that the network can support, so creating a balanced ecosystem. In order to achieve this, a careful balance is required between opening up the opportunity to run a block-creating pool to as many pools as want to run the system, against the raw economics of running a pool (from bare metal servers, to cloud services, to people’s time), taking into account the rewards that can be earned from the actively delegated stake. Changing this parameter will therefore be done with a degree of caution and balance so that we ensure the long term success of a fully decentralized Cardano network. We’re now looking carefully at early pool data and doing some further modelling before making the next move.
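The saturation mechanic above reduces to a cap: a pool earns rewards only on stake up to total_stake / k, so delegating to a saturated pool yields nothing extra. A minimal sketch, with illustrative figures (not actual circulating supply):

```python
# Stake above the saturation threshold (total_stake / k) earns no rewards,
# nudging delegators towards less-saturated pools. Figures are illustrative.

def rewarded_stake(pool_stake, total_stake, k):
    saturation = total_stake / k            # cap on reward-earning stake
    return min(pool_stake, saturation)

total = 30_000_000_000                      # illustrative total delegated ada
k = 150                                     # targeted number of pools

print(rewarded_stake(300_000_000, total, k))  # 200000000.0 – capped at total/k
print(rewarded_stake(150_000_000, total, k))  # 150000000.0 – below the cap
```

Raising k lowers the cap, which pushes stake, and hence block production, towards a larger number of pools.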

d and pool rewards

Two questions remain: What is the effect of d on the rewards that a pool can earn, and can this parameter ever be increased?

Regarding rewards, as long as a pool produces at least one block, the value of the parameter has absolutely no effect on the rewards that a pool will earn – only on the number of blocks that are distributed to the pools. So if a pool has exactly 1% of the stake, it will earn precisely 1% of the total rewards, provided that it maintains its expected performance.
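This stake-proportionality can be shown in a couple of lines. A hedged sketch, assuming the simplified rule stated above (at least one block produced, expected performance maintained); names and figures are illustrative:

```python
# Rewards depend only on the pool's stake share, not on d, provided the
# pool produced at least one of its allocated blocks. Simplified model.

def pool_rewards(stake_fraction, total_rewards, blocks_made):
    """Stake-proportional reward share, gated on making >= 1 block."""
    return total_rewards * stake_fraction if blocks_made >= 1 else 0.0

# A pool with exactly 1% of the stake earns 1% of the rewards at any d.
print(pool_rewards(0.01, 1_000_000, blocks_made=3))   # 10000.0
print(pool_rewards(0.01, 1_000_000, blocks_made=0))   # 0.0 – no block, no reward
```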

d parameters and rewards

Finally, while d could in theory be increased, there would need to be a truly compelling reason to do so (a major protocol issue, or fundamental network security, for example). We would never envision actually doing this in practice. Why? Simply because we want to smoothly and gradually reduce the parameter to 0 in order to achieve our objective of true decentralization. We’ll be making this journey carefully but with determination, step by step. If each step is taken thoughtfully and with confidence, we should not need to retrace it. As d becomes 0, the centralized IOG servers will be finally switched off, and Cardano will become a model of decentralization that other blockchains aspire to.


The decline of centralized entities coincides with Cardano's rise towards full and true decentralization. In the near future, the Cardano blockchain will be solely supported and operated by a strong community of stake pools whose best interest is the health and further development of the network.

This journey, which began with Shelley and the implementation of the d parameter, will take Cardano through a path of evolutionary stages in which the network will become progressively more and more decentralized, as d decays. The journey will only end when the blockchain enters a state of irrevocable decentralization, a moment in time that will see networking, block production, and governance operating in harmony within a single environment.