Setting solid parameter values – while maintaining flexibility for the future – will be key to the growth and ongoing decentralization of Cardano. After consulting with the community, and working closely with my colleagues Kevin Hammond and Alex Appledoorn, we believe we’ve identified a good place to start.
The behavior of Cardano Shelley is controlled by around 20 parameters, and values have to be set for all of those before we launch the mainnet. Most of these parameters are technical in nature: while setting them correctly is important to guarantee the safety and optimize the performance of the system, their particular values do not have a significant influence on the user experience.
Some parameters are different, though. They determine the level of centralization and sustainability of the Cardano ecosystem. They also drive the economics of delegation and of operating a stake pool. Choosing good values for these is exceedingly complicated, because we have to carefully balance a number of important considerations: security, performance, stability, sustainability, decentralization, fairness, and economic viability.
With all parameters on the Cardano blockchain, we have three distinct goals to keep in mind:
- We want to be truly decentralized, so that no one party can threaten the integrity of the chain
- We want the stake pool operators to be incentivized to keep supporting our chain
- We do not want these incentives to change significantly at any single point in time, in a way that might negatively affect the stability of the operators’ income
We want to give equal opportunity to everyone who wants to participate in Cardano and run a stake pool. However, parameter values that might seem fair and reasonable for smaller pools can become challenging for larger pools and vice versa. For example, large pools could find it easy to put down a higher pledge than small pools can afford. Small pools, on the other hand, might be able to operate with far lower costs than larger pools.
We also consider it imprudent to change parameters too frequently, because this might negatively affect the stability and predictability of the operators’ income. Taking all of this into consideration, we came up with some recommendations for initial parameter values, which we outline here.
However, we do not want to stop there. With decentralization comes democracy. Our community must have a say in how the chain is governed. For this reason, we will run with these numbers initially and issue a Cardano improvement proposal, where the community can vote on optimal chain parameters. In the end, the governance of Cardano will be in the hands of the Cardano community, who we feel confident are the best people to advise us.
Desired number of stake pools
The desired number of stake pools k is an important parameter. Cardano incentives have been designed to encourage an equilibrium with k fully saturated pools, which means that rewards will be optimal for everybody when all stake is delegated uniformly to the k most attractive pools.
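To make the saturation mechanism concrete, here is a minimal sketch. The circulating-supply figure is an illustrative assumption, not a value from this article:

```python
# Sketch of the pool 'saturation point' implied by k: a pool's rewards stop
# growing once its stake exceeds 1/k of the total delegated stake, so rewards
# are optimal when stake spreads uniformly over k pools.

def saturation_point(total_stake: float, k: int) -> float:
    """Stake level beyond which a single pool earns no additional rewards."""
    return total_stake / k

total_stake = 31_000_000_000  # assumed circulating ada, for illustration only
k = 150
cap = saturation_point(total_stake, k)
print(f"Saturation point: {cap:,.0f} ada per pool")  # roughly 207 million ada
```

Delegating beyond this cap dilutes everyone's rewards in that pool, which is what nudges the system toward k pools of roughly equal size.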
The higher the chosen value of k, the more decentralized the system becomes. But a higher k also leads to a less efficient system (higher costs, more energy consumption) and lower rewards for both delegators and stake pool owners. Based on what we have learned from both the Incentivized Testnet (ITN) and the Haskell Shelley testnet, we know that our community is highly motivated to set up pools and support the chain, creating hundreds of pools within a matter of weeks.
This tells us that some measure of decentralization can – and will – happen relatively quickly. But decentralization alone is not enough. Cardano needs a long term commitment from its operators, and conversely, operators need to be sufficiently incentivized to keep supporting the system.
To strike a balance between decentralization and these incentives for stake pool operators, we are proposing an initial k=150 and then to gradually increase that value. We believe this will ensure that the system is stable and efficient in the beginning, and can gradually grow over time to become more decentralized (and even more secure) later on:
A system of 150 stake pools of roughly equal size would make Cardano an order of magnitude more decentralized than any other blockchain. And this is only the beginning. There is no reason why there could not be thousands of stake pools in the future.
Staking rewards for both delegators and stake pool operators are taken from two sources: transaction fees and monetary expansion. Specifically, every epoch, all the transaction fees from all blocks produced during that epoch are put into a virtual 'pot'. Additionally, a fixed percentage, ρ, of the remaining ada reserves is added to that pot. Then a certain percentage, τ, of the pot is sent to the treasury, and the rest is used as epoch rewards.
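The pot mechanics just described can be sketched as follows. The fee and reserve figures are illustrative assumptions; ρ and τ use the values proposed later in this article:

```python
# Per-epoch reward pot: transaction fees plus a fixed fraction rho of the
# remaining reserves, with a fraction tau of the pot sent to the treasury.

RHO = 0.0022   # monetary expansion per epoch (0.22%)
TAU = 0.05     # treasury cut of the pot (5%)

def epoch_pot(fees: float, reserves: float, rho: float = RHO, tau: float = TAU):
    pot = fees + rho * reserves        # everything available this epoch
    treasury = tau * pot               # skimmed off for the treasury
    rewards = pot - treasury           # distributed to pools and delegators
    new_reserves = reserves - rho * reserves
    return rewards, treasury, new_reserves

# Illustrative numbers: modest fees, an assumed ~14 billion ada reserve.
rewards, treasury, reserves_left = epoch_pot(fees=100_000, reserves=14_000_000_000)
```

Note that as reserves shrink, the fee term grows in relative importance, which is exactly the handover described in the next paragraph.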
This mechanism ensures that in the beginning, when the number of transactions is still relatively low, because users are just starting to build their business on Cardano, the portion of rewards taken from the reserves is high. This provides a great incentive for early adopters to move quickly and benefit from the high initial rewards. Over time, as transaction volume increases, the additional fees compensate for dwindling reserves.
This mechanism also ensures that available rewards are predictable and change gradually. There will be no sudden 'jumps' comparable to bitcoin halving events every four years. Instead, the fixed percentage taken from remaining reserves every epoch guarantees a smooth exponential decline.
So what value should ρ have? And how much should go to the treasury? This is again a trade-off: higher values of ρ mean higher rewards for everybody initially and a treasury that fills faster, but they also mean faster reserve depletion. It is certainly important, especially in the beginning, to pay high rewards and incentivize early adopters. But it is also important to provide a long-term perspective for all stakeholders.
As explained above, Cardano will never completely run out of reserves; instead, the reserves follow an exponential decay. To get a feeling for the impact of a specific value of ρ, one can calculate the 'reserve half-life': the time it takes for half of the remaining reserves to be used up.
After much deliberation, we arrived at a suggestion of 0.22% per epoch for ρ. When you crunch the numbers, this gives a 'reserve half-life' of around four to five years. In other words, every four to five years, half of the remaining reserves will be used. This is close to Bitcoin's halving interval of circa four years, so Cardano reserves will deplete at about the same rate as Bitcoin's remaining supply.
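The half-life figure is easy to check: with geometric decay of 0.22% per epoch and Shelley's 5-day epochs, solving (1 − ρ)^n = 0.5 gives the number of epochs needed to halve the reserves:

```python
import math

# Reserve half-life implied by rho = 0.22% per epoch, assuming 5-day epochs
# (73 epochs per year). The reserve decays geometrically by (1 - rho) each
# epoch, so the half-life is log(0.5) / log(1 - rho) epochs.

RHO = 0.0022
EPOCHS_PER_YEAR = 365 / 5  # 73

epochs_to_halve = math.log(0.5) / math.log(1 - RHO)
years_to_halve = epochs_to_halve / EPOCHS_PER_YEAR
print(f"{years_to_halve:.1f} years")  # ≈ 4.3 years, matching the 4-5 year figure
```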
It is worth noting here that it took Bitcoin around eight years to reach peak adoption and price. We therefore feel it makes sense to expect Cardano transaction volume and exchange rate to increase sufficiently over the next eight years to more than make up for the decrease in monetary expansion during that time.
From reserves to treasury
We also propose an initial value of 5% for τ, the percentage of rewards automatically going to the treasury every epoch. This means that at least 380,000,000 ada will be sent from the reserves to the treasury over the next 5 years.
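As a rough sanity check of the 380,000,000 ada figure, we can sum the treasury's 5% cut of the reserve expansion over five years. The ~14 billion ada starting reserve here is an illustrative assumption:

```python
# Accumulate the treasury's share of the monetary expansion, epoch by epoch,
# over five years of 5-day epochs. Fees are ignored, so this counts only the
# flow from reserves to treasury, as in the figure quoted above.

RHO, TAU = 0.0022, 0.05
EPOCHS = 73 * 5  # five years
reserves = 14_000_000_000  # assumed starting reserve, for illustration
treasury = 0.0
for _ in range(EPOCHS):
    expansion = RHO * reserves
    treasury += TAU * expansion
    reserves -= expansion
print(f"{treasury:,.0f} ada")  # ≈ 387 million with these assumptions
```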
However, the real amount going to the treasury will be significantly higher. First of all – again taking learnings from the ITN, but also predicting the use of ada in the future – it is unreasonable to assume that all ada will be delegated. Some of it will be locked up on exchanges, be transacted, and be used in various smart contracts. The ada that is not being delegated will produce unclaimed rewards. Those 'unclaimed rewards' also go to the treasury, which will bump the amount to around 1,900,000,000 ada.
Secondly, we do not expect the pledge of most pools to be particularly high, just high enough to make it unattractive to launch a Sybil attack. The difference between potential pool rewards with a very high pledge and pools with the more realistic pledge level we expect goes to the treasury as well and will add an additional 1,000,000,000 ada over the first five years. The sum of all the ada flowing to the treasury means that there will be sufficient funds to pay for new exciting features and extensions for the foreseeable future.
Pledge influence factor & minimum operational cost settings
Ada that is pledged by pool owners provides essential protection against 'Sybil' attacks, by ensuring that delegated stake is not excessively attracted to pools whose owners try to attack the system by creating a large number of pools without themselves owning a lot of stake. Kevin Hammond, Duncan Coutts, and I covered this in some detail recently on the Cardano Effect show.
The pledge influence factor directly affects the rewards that a pool earns: the higher the influence factor, the more of a difference a higher pledge makes on rewards. A higher influence factor increases the level of Sybil protection and makes the system safer and more secure, but it also gives an advantage to stake pool owners that can afford a higher pledge.
Higher pledge can be used to compensate for higher operational costs, meaning that a pool with relatively high costs can maintain suitable rewards and remain attractive to delegators by increasing its pledge. We have tested a variety of pledge influence factors under various real world conditions (about a million simulations in all). The influence factor can range between 0 and infinity. Our chosen initial setting of 0.3 is designed to balance the level of Sybil protection against the required pledge.
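The effect of the influence factor can be illustrated with the maximal pool reward formula from the Cardano reward-sharing scheme. This is our transcription, for illustration only; the ledger specification is the authoritative source:

```python
# Maximal pool reward as a function of the pool's relative stake (sigma) and
# the owner's relative pledge (s). a0 is the pledge influence factor and
# z0 = 1/k the saturation point. With a0 = 0, pledge has no effect at all.

def max_pool_reward(R: float, sigma: float, s: float,
                    a0: float = 0.3, k: int = 150) -> float:
    z0 = 1 / k
    sigma_ = min(sigma, z0)   # stake beyond saturation earns nothing extra
    s_ = min(s, z0)           # pledge influence is capped the same way
    return (R / (1 + a0)) * (sigma_ + s_ * a0 * (sigma_ - s_ * (z0 - sigma_) / z0) / z0)

# A saturated pool with a higher pledge earns more from the same epoch pot
# (R here is an assumed total epoch reward, in ada):
R = 30_000_000
low  = max_pool_reward(R, sigma=1/150, s=0.0001)
high = max_pool_reward(R, sigma=1/150, s=1/150)
```

A fully pledged saturated pool earns up to (1 + a0) times what an unpledged one does, which is exactly the lever that makes Sybil attacks (many pools, little stake) unprofitable.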
There is no minimum pledge, though. Pool operators can set the pledge as low or as high as they like. Rewards are influenced by their choice, but there is no 'hard' rule forcing them to pledge a specific amount. This means that ultimately, pool pledges will be as high as pool owners are willing to make them, and it will be up to our community to find a sweet spot between protection against attacks, economic considerations and the desire for fairness and equal opportunity.
The minimum operational cost setting ensures that the pledge influence factor is effective, by avoiding a 'race to the bottom' where pool owners claim excessively low operating costs in order to gain a competitive advantage. While this might benefit ada stakeholders in the short term, the long-term effect would be to risk the health of the Cardano network by disincentivizing professional pool operation.
Distribution of Typical Pool Operating Costs per pool per year, obtained from a survey of experienced pool operators in May 2020.
Genuine low-cost operators can greatly benefit from the minimum operational cost, because the difference between the minimum cost and their actual cost provides them with additional income on top of their margin and their staking rewards. Our research shows that typical operating costs are expected to be in the $2,000-$15,000 range per pool per year, as shown in the diagram above. We have therefore chosen a setting of $2,000 for the minimum operational cost.
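How the declared cost and margin shape an operator's income can be sketched as follows. This mirrors the split described in the reward-sharing papers; the function name and the figures in the example are our own, hypothetical:

```python
# Split of a pool's epoch reward between operator and delegators: the
# declared cost comes off the top, then the operator's margin, and the
# remainder is shared pro rata by stake (the operator also earns on their
# own pledged stake).

def split_rewards(pool_reward: float, cost: float, margin: float,
                  owner_stake: float, total_stake: float):
    if pool_reward <= cost:
        return pool_reward, 0.0  # operator keeps everything, delegators get nothing
    after_cost = pool_reward - cost
    owner_frac = owner_stake / total_stake
    operator = cost + margin * after_cost + (1 - margin) * after_cost * owner_frac
    delegators = (1 - margin) * after_cost * (1 - owner_frac)
    return operator, delegators

# Hypothetical epoch: 10,000 ada pool reward, 340 ada declared cost,
# 2% margin, owner pledging 1% of the pool's stake.
op, dels = split_rewards(pool_reward=10_000, cost=340, margin=0.02,
                         owner_stake=100_000, total_stake=10_000_000)
```

An operator whose real costs are below the declared (minimum) cost pockets the difference, which is the 'additional income' for genuine low-cost operators mentioned above.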
Estimated Range of Average Return on Investment (ROI) for Stake Pools assuming a Monetary Expansion Rate of 0.22% per epoch.
Finally, we calculated the expected returns for stake pools under a range of different real-world scenarios (about 150,000 pools in total). We used the settings for the influence factor, monetary expansion and minimum cost that were given above and varied the targeted number of pools between 150 and 500. Our results show that given the distribution of costs that we showed in the diagram above, stake pools will achieve sustainable ROIs of between 6%-6.5% on average, using today’s ada to dollar conversion rate. The ROIs would, of course, be even better if the value of ada were to appreciate.
Choosing good values for all Cardano Shelley parameters is a hard and complicated endeavor, because a lot of concerns have to be balanced – security, efficiency and stability of the system on the one hand versus economic viability for stake pool operators and delegators and long-term sustainability of the ecosystem on the other hand.
No other blockchain has ever done what we are about to do. We are charting new territory with every step, moving at the cutting edge of science and technology, so we cannot rely on existing data, statistics, or past experience; instead, we have to use educated guesses and mathematical models, which will never be perfect.
We did our best to come up with a reasonable proposal, but we know it will have to be improved upon over time. The values proposed here are just a start, and we will work closely with our community to refine and adjust them over the coming months and years.
Designing and deploying a distributed ledger is a technically challenging task. What is expected of a ledger is the promise of a consistent view to all participants as well as a guarantee of responsiveness to the continuous flow of events that result from their actions. These two properties, sometimes referred to as persistence and liveness, are the hallmark of distributed ledger systems.
Achieving persistence and liveness in a centralized system is a well-studied and fairly straightforward task; unfortunately, the ledger that emerges is precariously brittle because the server that supports the ledger becomes a single point of failure. As a result, hacking the server can lead to the instant violation of both properties. Even if the server is not hacked, the interests of the server’s operators may not align with the continuous assurance of these properties. For this reason, decentralization has been advanced as an essential remedy.
Informally, decentralization refers to a system architecture that calls for many entities to act individually in such a way that the ledger’s properties emerge from the convergence of their actions. In exchange for this increase in complexity, a well-designed system can continue to function even if some parties deviate from proper operation. Moreover, in the case of more significant deviations, even if some disruption is unavoidable, the system should still be capable of returning to normal operation and contain the damage.
How does one design a robust decentralized system? The world is a complicated place and decentralization is not a characteristic that can be hard-coded or demonstrated via testing – the potential configurations that might arise are infinite. To counter this, one must develop models that systematically encompass all the different threats the system may encounter and demonstrate rigorously that the two basic properties of persistence and liveness are upheld.
The strongest arguments for the reliability of a decentralized system combine formal guarantees against a broad portfolio of different classes of failure and attack models. The first important class is that of powerful Byzantine models. In this setting, it should be guaranteed that even if a subset of participants arbitrarily deviate from the rules, the two fundamental properties are retained. The second important class is models of rationality. Here, participants are assumed to be rational utility maximizers and the objective is to show that the ledger properties arise from their efforts to pursue their self interest.
Ouroboros is a decentralized ledger protocol that is analyzed in the context of both Byzantine and rational behavior. What makes the protocol unique is the combination of the following design elements.
- It uses stake as the fundamental resource to identify the participants’ leverage in the system. No physical resource is wasted in the process of ledger maintenance, which is shown to be robust despite ‘costless simulation’ and ‘nothing at stake’ attacks that were previously thought to be fundamental barriers to stake-based ledgers. This makes Ouroboros distinctly more appealing than proof-of-work protocols, which require prodigious energy expenditure to maintain consensus.
- It is proven to be resilient even if arbitrarily large subsets of participants, in terms of stake, abstain from ledger maintenance. This guarantee of dynamic availability ensures liveness even under arbitrary, and unpredictable, levels of engagement. At the same time, of those participants who are active, barely more than half need to follow the protocol – the rest can arbitrarily deviate; in fact, even temporary spikes above the 50% threshold can be tolerated. Thus Ouroboros is distinctly more resilient and adaptable than classical Byzantine fault tolerance protocols (as well as all their modern adaptations), which have to predict with relative certainty the level of expected participation and may stop operating when the prediction is false.
- The process of joining and participating in the protocol execution is trustless in the sense that it does not require the availability of any special shared resource such as a recent checkpoint or a common clock. Engaging in the protocol requires merely the public genesis block of the chain, and access to the network. This makes Ouroboros free of the trust assumptions common in other consensus protocols whose security collapses when trusted shared resources are subverted or unavailable.
- Ouroboros incorporates a reward-sharing mechanism to incentivize participants to organize themselves in operational nodes, known as stake pools, that can offer a good quality of service independently of how stake is distributed among the user population. In this way, all stakeholders contribute to the system’s operation – ensuring robustness and democratic representation – while the cost of ledger maintenance is efficiently distributed across the user population. At the same time, the mechanism comes with countermeasures that de-incentivize centralization. This makes Ouroboros fundamentally more inclusive and decentralized compared with other protocols that either end up with just a handful of actors responsible for ledger maintenance or provide no incentives to stakeholders to participate and offer a good quality of service.
These design elements of Ouroboros are not supposed to be self-evident appeals to the common sense of the protocol user. Instead, they were delivered with meticulous documentation in papers that have undergone peer review and appeared in top-tier conferences and publications in the area of cybersecurity and cryptography. Indeed, it is fair to say that no other consensus research effort is represented so comprehensively in these circles. Each paper is explicit about the specific type of model that is used to analyze the protocol and the results derived are laid out in concrete terms. The papers are open-access, patent-free, and include all technical details to allow anyone, with the relevant technical expertise, to convince themselves of the veracity of the claims made about performance, security, and functionality.
Building an inclusive, fair and resilient infrastructure for financial and social applications on a global scale is the grand challenge of information technology today. Ouroboros contributes, not just as a protocol with unique characteristics, but also in presenting a design methodology that highlights first principles, careful modeling and rigorous analysis. Its modular and adaptable architecture also lends itself to continuous improvement, adaptation and enrichment with additional elements (such as parallelization to improve scalability or zero-knowledge proofs to improve privacy, to name two examples), which is a befitting characteristic to meet the ever-evolving needs and complexities of the real world.
To delve deeper into the Ouroboros protocol, from its inception to recent new features, follow these links:
- Ouroboros (Classic): the first provably secure proof-of-stake blockchain protocol.
- Ouroboros Praos: removes the need for a rigid round structure and improves resilience against ‘adaptive’ attackers.
- Ouroboros Genesis: how to avoid the need for a recent checkpoint and prove the protocol is secure under dynamic availability for trustless joining and participating.
- Ouroboros Chronos: removes the need for a common clock.
- Reward sharing schemes for stake pools.
- Account management and maximizing participation in stake pools.
- Optimizing transaction throughput with proof-of-stake protocols.
- Fast settlement using ledger combiners.
- Ouroboros Crypsinous: a privacy-preserving proof-of-stake protocol.
- Kachina: a unified security model for private smart contracts.
- Hydra: an off-chain scalability architecture for high transaction throughput with low latency, and minimal storage per node.
For exchanges and developer partners, integrating with any blockchain can be challenging. The technology often moves so quickly that keeping up with the pace of change can be unrealistic. Cardano’s development and release process is now driving things forward apace. Managing parallel software development workstreams moving at different speeds can feel a bit like changing the tires on a truck while it’s driving at 60 miles per hour.
Cardano’s vision is to provide unparalleled security and sustainability to decentralized applications, systems, and societies. It has been created to be the most technologically advanced and environmentally sustainable blockchain platform, offering a secure, transparent, and scalable template for how we work, interact, and create, as individuals, businesses, and societies.
In line with these ambitions, we needed to devise a way that our partners could swiftly, easily and reliably integrate with Cardano, regardless of what was going on under the hood. Whatever the pace and cadence of future rollouts, we wanted to develop a consistent method by which all updates to the core node could be easily adopted by everyone.
In order to make that integration and interaction with Cardano easier and faster, IOHK engineers formed the Adrestia team, to take responsibility for building all the web APIs and libraries that make Cardano accessible to developers and application builders. Developments to the node can then focus on performance and scalability, while users will always be able to interact with it effortlessly. The name Adrestia was chosen after the goddess of revolt because with these new interfaces we expect everyone to be able to integrate with Cardano, creating a ‘revolution’ in accessibility.
Enabling developers to keep pace with change
The goal of the Adrestia team is to provide – via web APIs – a consistent integration experience, so that developers know what to expect between Cardano roadmap releases. Whether they are building a wallet or running an exchange, users can flexibly explore the chain, make transactions, and more.
The APIs are as follows:
- cardano-wallet: HTTP REST API for managing UTXOs, and much more.
- cardano-submit-api: HTTP API for submitting signed transactions.
- cardano-graphql: HTTP GraphQL API for exploring the blockchain.
The SDK consists of several low-level libraries:
- cardano-addresses: Address generation, derivation & mnemonic manipulation.
- cardano-coin-selection: Algorithms for coin selection and fee balancing.
- cardano-transactions: Utilities for constructing and signing transactions.
- bech32: Haskell implementation of the Bech32 address format (BIP 0173).
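As an illustration of the last item, here is the Bech32 checksum from BIP 0173 (which the bech32 library implements in Haskell), transcribed into Python from the BIP's reference code:

```python
# Bech32 checksum verification per BIP 0173. An address is hrp + '1' + data,
# where the last six data characters are a BCH checksum over the whole string.

CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def bech32_polymod(values):
    """BCH checksum core: fold the generator over the 5-bit value stream."""
    gen = [0x3b6a57b2, 0x26508e6d, 0x1ea119fa, 0x3d4233dd, 0x2a1462b3]
    chk = 1
    for value in values:
        top = chk >> 25
        chk = (chk & 0x1ffffff) << 5 ^ value
        for i in range(5):
            chk ^= gen[i] if ((top >> i) & 1) else 0
    return chk

def bech32_hrp_expand(hrp):
    """Expand the human-readable part for checksum computation."""
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def bech32_verify(bech):
    """Check the checksum of a Bech32 string (case handling simplified)."""
    hrp, _, data = bech.lower().rpartition("1")
    values = [CHARSET.find(c) for c in data]
    return bech32_polymod(bech32_hrp_expand(hrp) + values) == 1

# "a12uel5l" is a valid test vector from BIP 0173:
print(bech32_verify("a12uel5l"))  # True
```

Any single-character error breaks the checksum, which is why Shelley addresses using this format are robust against typos.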
In addition to providing a flexible and productive way to integrate with Cardano, this approach also makes maintenance easier. Because the interfaces stay consistent, updating integrations between releases requires less time, and this familiarity reduces maintenance costs. New software can be deployed in days rather than weeks. Ultimately, anyone can keep pace with change.
The results are now live in the Byron era of Cardano. Exchanges and third-party wallets using Cardano-SL should now be integrating with the new Byron node, and then upgrading to the Shelley wallet. These steps need to happen consecutively to avoid any outages. Full details have been added to the Adrestia team repo, and we continue to work with our partners to ensure there is no interruption in service for ada holders keeping their funds on exchanges or in third-party wallets. The chart below shows the difference between the Cardano-SL node and the upcoming Shelley node. The components in red are not Shelley compatible and will break after the hard fork, while the other components are Shelley compatible and will be supported during and after the hard fork.
Consistency is key in creating a blockchain network that works for everyone. Cardano is not being built for the next five or ten years, but for the next fifty. Change to the system is inevitable in that time but Adrestia was made to ensure that everyone can connect with the Cardano node. To get started, check out the Adrestia project repo and read the user guide.
A community's overall success largely depends on its ability to collaborate. How its members interact with each other, how they share knowledge to find new avenues of development, and how open they are to embrace innovation and novel technologies will determine the community's long-term viability.
One of IOHK’s founding principles is its belief in nurturing a collaborative ecosystem for blockchain development. Our commitment to knowledge-sharing and to our deeply-held principles of open-source serves as the rationale behind becoming a member of the Hyperledger community.
IOHK builds its own blockchain technologies while simultaneously partnering with other enterprises - all driven by the shared goal of advancing blockchain adoption. We want to share what we have learned, build partnerships, and combine expertise to solve enterprise challenges across multiple industries for the benefit of all.
Becoming part of the Hyperledger Community: What it means for IOHK
Blockchain is a rapidly developing technology. But its sheer versatility has demonstrated an unparalleled potential for innovation across multiple sectors. We are now entering the next phase of maturity with the creation of enterprise ecosystems to support novel projects that leverage decentralization.
Hyperledger focuses on developing a suite of robust frameworks, tools, and libraries for enterprise-grade blockchain deployments, serving as a collaborative platform for various distributed ledger frameworks.
IOHK is joining Hyperledger to contribute to its growing suite of blockchain projects while employing the Hyperledger umbrella to provide visibility and share components of IOHK's interoperable framework. Through this collaborative effort, we want to foster and accelerate collective innovation and industrial adoption of blockchain technologies.
Why we are joining Hyperledger
Hyperledger includes more than 250 member companies, including many industry leaders in finance, banking, technology, and other sectors. Becoming a part of this consortium enables us to do two things: (1) collaborate and build synergies with others in this community, and (2) refine our products for further interoperability. By doing so, we hope to encourage the co-development of our protocols and products, in line with our open-source development philosophy.
We are keen to gain a deeper understanding of specific enterprise requirements from all these projects, so we can construct better products and attune our offering more accurately. We want to help shape the future of the enterprise blockchain ecosystem by leveraging blockchain technology for a more decentralized economy - both through the products and solutions we engineer, and everything that this framework represents. We believe that this collaborative spirit will drive innovation and adoption of blockchain over the next decade, and beyond.
Smart contracts use the GHCJS cross-compiler to translate off-chain code
4 June 2020 · Luite Stegeman · 8 min read
This is the second of the Developer Deep Dive technical posts from our Haskell team. This occasional series offers a candid glimpse into the core elements of the Cardano platform and protocols, and gives insights into the engineering choices that have been made. Here, we outline some of the work that has been going on to improve the libraries and developer tools for Plutus, Cardano’s smart contract platform.
At IOHK we are developing the Plutus smart contract platform for the Cardano blockchain. A Plutus contract is a Haskell program that is partly compiled to on-chain Plutus Core code and partly to off-chain code. On-chain code is run by Cardano network nodes using the interpreter for Plutus Core, the smart contract language embedded in the Cardano ledger. This is how the network verifies transactions. Off-chain code is for tasks such as setting up the contract and for user interaction. It runs inside the wallet of each user of the contract, on a node.js runtime.
In the past year, we haven't made many changes in the GHCJS code generator. Instead, we did some restructuring to make compiling things with GHCJS more reliable and predictable as well as adding support for Windows and making use of the most recent Cabal features. This post gives an overview of what has happened, and a brief look at what's in store for this year.
When installing a package with GHCJS, you probably use the --ghcjs command-line flag or include compiler: ghcjs in your configuration file. This activates the ghcjs compiler flavor in Cabal, which is based on the regular ghc flavor.
Cabal has introduced many features in recent years, including support for Backpack, Nix-style local builds, multiple (named) libraries per package, and per-component build plans. Unfortunately, the new features resulted in many changes to the code base, and maintenance of the ghcjs flavor fell behind for some time. We have brought GHCJS support up to date again in Cabal 3.0. If you want to use the new-style build features, make sure that you use cabal-install version 3 or later.
The differences between the ghcjs and ghc compiler flavors are minor, and cross-compilation support in Cabal has been improving. Therefore, we hope that eventually we will be able to drop the ghcjs compiler flavor altogether; the extensions would instead be added as platform-specific behaviour in the ghc flavor.
GHC allows the compiler to be extended with plug-ins, which can change aspects of the compilation pipeline. For example, plug-ins can introduce new optimization features or extend the typechecker.
Unlike Template Haskell, which is separated from the compiler through the Quasi typeclass abstraction, plug-ins can directly use the whole GHC API. This makes the ‘external interpreter’ approach that GHCJS introduced for running Template Haskell in a cross-compiler unsuitable for compiler plug-ins. Instead, plug-ins need to be built for the build platform (that runs GHC).
In 2016, GHCJS introduced experimental support for compiler plug-ins. This relied on looking up the plug-in in the GHCJS package database and then trying to find a close match for the plug-in package and module in the GHC package database. We have now added a new flag to point GHCJS to packages runnable on the build system. This makes plug-ins usable again with new-style builds and other ‘exotic’ package database configurations.
In principle, our new flag can make plug-ins work on any GHC cross-compiler, but the requirement to also build the plug-in with the cross-compiler is quite ugly. We are working on removing this requirement followed by merging plug-in support for cross-compilers into upstream GHC (see ticket 14335 and ticket 17957).
Long, long ago, GHCJS worked on Windows. One or two brave souls might have actually used it! Its boot packages (the packages built by ghcjs-boot) would include the Win32 package on the Windows build platform. The reason for this was the Cabal configuration with GHCJS: Cabal's os(win32) flag would be set if the build platform was Windows. At the time, it was easiest to just patch the package to build without errors with GHCJS, and include it in the boot packages. However, the Win32 package didn't really work, and keeping it up to date was a maintenance burden. At some point it fell behind, and GHCJS didn't work on Windows any more.
The boot packages having to include Win32 on Windows was indicative of poor separation between the build platform (which runs the compiler) and the host platform (which runs the executable produced by the compiler). This was caused by the lack of a complete C toolchain for GHCJS. Many packages don't just have Haskell code; they also have files produced by a C toolchain, for example via an Autotools configure script or hsc2hs.
The GHCJS approach was to include some pre-generated files, and use the build platform C toolchain (the same that GHC would use) for everything else, hoping that it wouldn't break. If it did break, we would patch the package.
In recent years, the web browser as a compilation target has steadily been gaining more traction. Emscripten has been providing a C toolchain for many years, and has recently switched from its own compiler backend to the Clang compiler with the standard LLVM backend.
Clang has been supported by GHC as a C toolchain for a while. It can output asm.js and WebAssembly code that can run directly in the browser. Unfortunately, users of the compiler cannot yet directly interact with compiled C code through the C FFI (foreign import ccall) in GHCJS. But having a C toolchain allows packages that depend on configure scripts or hsc2hs to compile much more reliably. This fixes some long-standing build problems and allows us to support Windows again, which we think is already worth the additional dependency.
A variant of GHCJS 8.6 using the Emscripten toolchain is available in the ghc-8.6-emscripten branch, and can be installed on Windows. This time around, the set of boot packages is the same on every build platform. Emscripten is planned to be the standard toolchain from GHCJS 8.8 onwards.
The downside of this approach was that the build platform very much affected the generated code. If you built on a 64-bit Linux machine, all platform constants would come from the Linux platform, and the code would be built with the assumption of a 64-bit word size.
Later, we switched to using ghc as a library, introducing Hooks to change the compilation pipeline where needed. This made it possible to make the GHCJS platform word size independent of the build platform.
Unfortunately, it turned out to be hard to keep up with changes in the upstream ghc library. In addition, modifying the existing Hooks encouraged engineers to work around issues instead of directly fixing them upstream.
In early 2018, we decided to build a custom ghc library for GHCJS, installed as ghc-api-ghcjs, allowing us to work around serious issues before they were merged upstream. Recently, we dropped the separate library, and now build both the GHC and GHCJS source code as one library.
Although we cannot build GHCJS with the GHC build system yet, we are using the upstream GHC source tree much more directly again. Are we going back to the past? Perhaps, but this time we have our own platform with a toolchain and build tools, avoiding the pitfalls that made this approach so problematic the first time.