
Bringing Glow to Cardano

We just spun up a devnet to support Glow, the very latest language Cardano will support. We talked to its creator about building a DSL for DApp development.

26 February 2021 Eric Czuleger 5 mins read


At the end of 2020, we announced our devnets plan to support the longer-term strategic goal of opening up Cardano to multiple development languages – as outlined in the ‘Island, Ocean, Pond’ video. This week, building on the Ethereum Virtual Machine, we’re rolling out a new development environment to support the Glow language.

François-René Rideau of Mutual Knowledge Systems is the creator of Glow, a DSL that will allow anyone to write verifiable DApps from a single spec and deploy them on our EVM network. We caught up with Rideau (also known as Fare) to hear more about his vision for Glow and the Cardano journey so far.

We first introduced the community to Glow and MuKn at the end of last year when we announced our devnets approach – but maybe you can remind us how you began working with IOHK?

I began my career proving the correctness of a centralized payment protocol, but soon I wanted to move on. I’ve been involved in crypto since 2014. Eventually I found Cardano and realized how much I liked the community. We have a similar focus on doing things the right way. That is why I wanted to port my domain-specific language, Glow, to Cardano.

Tell us a bit about why you started your company, Mutual Knowledge Systems – or, as you call it, MuKn (pronounced ‘Moon’)?

Almost three years ago I was reviewing whitepapers. For some, I understood the techniques; for others, the economics. For some, I understood a little of the economics and none of the techniques. Reading white papers that didn’t solve the underlying problems, I realized I could do better. So, I started designing a scaling solution.

A friend suggested that I work on scaling for smart contracts. At first, I tried to build a company around scaling, but soon we discovered that language and logic were crucial for everyone in the decentralized space. Now, we have a company called Mutual Knowledge Systems, which is built around our programming language, Glow. In essence, Glow is a much better way to write applications than existing languages.

When you say ‘better’, what do you really mean?

Writing a DApp is the single hardest thing to do in the world. This is because you can’t afford a mistake. Any error means a significant loss of user funds. On top of that, the tools didn’t exist to create the most secure DApps. So, we decided to make those tools.

When you make a DApp you are not just fighting random errors, you are fighting active adversaries. An attacker will always attempt to make bad things happen in your ecosystem; it can be very profitable for them. The military can guard their hardware infrastructure and keep their software secret. Developers in the blockchain space don’t have that luxury. With a DApp, part of it must be public. That means you can’t hide every bug or exploit.

I believe that to write a program, you should use a domain-specific language and a formal set of tools and techniques. The power of simplicity and abstraction allows us to do all the reasoning necessary with less attack surface. It is harder to check a million lines of code for a bug, but if you have 1,000 lines, then you can make sure it remains safe.

What is it about Cardano and its community that appeals?

I started like everyone else, on Ethereum. When I met the Cardano community, I felt we thought in the same way. We want to do things that are correct and things that work. We think in the long term, not just about whether it works today. We want to build on stone, not quicksand. At times, this can be frustrating because things move slowly, but I am happy with the attention to detail and quality in the development of Cardano. Is it perfect? No, it’s not. But it’s got great fundamentals.

Can you talk about how you hope Glow will change the DApp developer experience?

Glow is portable. Today it works on Cardano and Ethereum, but in the future it will work with any blockchain that is sufficiently advanced. That means you can write your app once and never have to worry about porting it to another platform. So, developers will run their application on the blockchain that works best, and those blockchains will shine on their own merits. This makes blockchains compete to bring a solid value proposition.

What can the community expect from Glow?

We are now launching this early version of Glow built on the EVM. We already have something to show. It is not production-ready, but we can demonstrate simple applications. Users can also see how they can write a 20-line application that performs along the same lines as a hundred-line application. So, while we’re not ready for the full launch yet, I think we have something exciting to show.

We’re rolling out the integration of Glow with our EVM and devnet program, so what are some of the benefits of this?

Glow can be used to target any smart contract on the EVM network. That means that Cardano can run any smart contract written with Glow on the sidechain.

What is the rollout process like and how can our community get involved if they want to?

Glow is still in development. There are some things that it can do and some it can’t. We invite anyone to join the Glow community where we are actively adding features. If you have a great project maybe we can prioritize features that you need.

If you’re a developer, we encourage you to get involved with Mutual Knowledge Systems and Glow. See our full conversation with François-René Rideau and a demonstration of Glow during Cardano360.

Babel fees - denominating transaction costs in native tokens

Introducing a novel mechanism that allows the payment of transaction fees in user-defined tokens on Cardano

25 February 2021 Prof Aggelos Kiayias 8 mins read


In Douglas Adams' classic The Hitchhiker's Guide to the Galaxy, a Babel fish is a creature that allows you to hear any language translated into your own. This fantasy of universal translation ensures meaningful interaction despite the myriad different languages in the galaxy. 

In the cryptocurrency space, smart contract platforms enable the development of a myriad of custom tokens. Is it possible to interact with the platform using your preferred token? If only there were a “Babel fees” mechanism to translate the token you use into the one that the platform requires for posting a transaction.

Common wisdom in blockchain systems suggests that posting a valid transaction must incur a cost to the sender. The argument is that, without such a constraint, there is nothing to stop anyone from overloading the system with trivial transactions, saturating its capacity and rendering it unusable. Given this tenet, a frequently drawn corollary is that in any blockchain system supporting user-defined tokens, it should be prohibited to pay transaction fees in such tokens. Instead, transactions should carry a fee in the native token of the platform, which all participants accept as valuable. Arguably, such a restriction is undesirable. But how is it possible to circumvent the ensuing – and seemingly inevitable – vulnerability?

The art of the possible

Cryptography and game theory have been known to make possible what seemed impossible. Celebrated examples include key exchange over a public channel, Merkle's puzzles, and auctions where being truthful is the rational thing to do, like Vickrey's auctions. And so it also turns out in this case. 

First, let us recall how native assets work in Cardano: Tokens can be created according to a minting policy and they are treated natively in the ledger along with ada. Cardano's ledger adopts the Extended UTXO (EUTXO) model, and issuing a valid transaction requires consuming one or more UTXOs. A UTXO in Cardano may carry not just ada but in fact a token bundle that can contain multiple different tokens, both fungible and non-fungible. In this way it is possible to write transactions that transfer multiple different tokens with a single UTXO. 
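A token bundle can be pictured as a mapping from asset identifiers to quantities. A minimal sketch (the asset names and quantities below are illustrative, not real ledger identifiers):

```python
# Sketch of an EUTXO-style token bundle: a mapping from asset
# identifier to quantity. Merging two bundles sums quantities
# per asset, as happens when a transaction consumes several UTXOs.
def merge_bundles(a, b):
    """Combine two token bundles, summing quantities per asset."""
    out = dict(a)
    for asset, qty in b.items():
        out[asset] = out.get(asset, 0) + qty
    # Drop assets whose net quantity is zero.
    return {asset: qty for asset, qty in out.items() if qty != 0}

utxo1 = {"ada": 5_000_000, "tokenX": 10}   # quantities in smallest units
utxo2 = {"ada": 1_000_000, "tokenY": 3}
combined = merge_bundles(utxo1, utxo2)
# combined == {"ada": 6_000_000, "tokenX": 10, "tokenY": 3}
```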

Transaction fees in the ledger are denominated in ada according to a function fixed as a ledger parameter. A powerful feature of Cardano's EUTXO model is that the fees required for a valid transaction can be predicted precisely prior to posting it. This is a unique feature that is not enjoyed by other ledger arrangements (such as the account-based model used in Ethereum). Indeed, in this latter case the fees needed for a transaction may change during the time it takes for the transaction to settle, since other transactions may affect the ledger's state in between and influence the required cost for processing the transaction. 
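This predictability follows from the fee being, in essence, a linear function of the transaction itself. A sketch of such a fee calculation, with illustrative parameter values (the real values are protocol parameters that can change over time):

```python
def min_fee(tx_size_bytes, fee_per_byte=44, fee_constant=155_381):
    """Minimum fee in lovelace as a linear function of transaction size.
    Parameter values here are illustrative, not the live protocol's."""
    return fee_per_byte * tx_size_bytes + fee_constant

# A small transaction can be priced exactly before submission:
min_fee(300)   # 44 * 300 + 155_381 = 168_581 lovelace
```

Because nothing in this calculation depends on ledger state, the sender knows the exact fee before posting, unlike in account-based models where intervening transactions can change the cost.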

A thought experiment

Let's consider a thought experiment to move closer to our objective of Babel fees. Imagine that it is possible to issue a transaction that declares a liability denominated in ada, equal to the amount of fees that the transaction issuer is supposed to pay. Such a transaction would not be admissible to the ledger. However, it can be perceived as an open offer that asks for the liability to be covered. Why would anyone respond to such an offer? To entice a response, using the token bundle concept already present in Cardano, the transaction can offer some amount of token(s) to whoever covers the liability. This suggests a spot trade between ada and the offered token(s) at a certain exchange rate. Consider now a block producer that sees such a transaction. The block producer can create a matching transaction that absorbs the liability, covering it with ada, and claims the tokens on offer.

By suitably extending the ledger rules, the transaction with the liability and its matching transaction become admissible to the ledger as a group. Because the liability is absorbed, the pair of transactions is properly priced in ada as a whole and hence does not break the ledger's bookkeeping rules for ada fees. As a result, the transaction with the liability settles, and we have achieved our objective. Users can submit transactions priced in any token(s) they possess and, provided a block producer is willing to take them up on the spot trade, have them settle in the ledger as regular transactions!

A concrete example

The mechanism is of course conditioned on the presence of liquidity providers that possess ada and are willing to issue matching transactions. In fact, the mechanism creates a market for such liquidity providers. For instance, a stake pool operator (SPO) can publish exchange rates for specific tokens they consider acceptable. For example, an SPO can declare that they will accept tokenX at an exchange rate of 3:1 over ada. It follows that if a transaction costs, say, ₳0.16, the transaction can declare a liability of ₳0.16 as well as offer 0.48 of tokenX. In the native asset model of Cardano this can be implemented as a single UTXO carrying a token bundle with the following specification: (ada → -0.16, tokenX → 0.48). Note the negative sign signifying the liability.
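The arithmetic of this example is a simple multiplication of the fee by the exchange rate. A sketch, working in smallest units to avoid floating-point error, and assuming (for illustration only) that tokenX uses the same six-decimal convention as ada:

```python
LOVELACE_PER_ADA = 1_000_000

def token_offer(fee_lovelace, tokens_per_ada):
    """Token units to offer so the liability is covered at the given
    token-per-ada rate (assumes the token shares ada's 6 decimals)."""
    return fee_lovelace * tokens_per_ada

fee = 160_000                 # the ₳0.16 liability, in lovelace
offer = token_offer(fee, 3)   # 480_000 units, i.e. 0.48 tokenX

# The liability UTXO from the example, as a token bundle with a
# negative ada entry (here in whole ada / whole tokens):
liability_bundle = {"ada": -0.16, "tokenX": 0.48}
```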

Suppose now that the SPO is about to produce a block. She recovers the liability transaction from the mempool and issues a matching transaction consuming the UTXO with the liability. The matching transaction transfers 0.48 of tokenX to a new output which is owned by the SPO. The resulting block contains the two transactions in sequence. The matching transaction provides the missing ₳0.16 in addition to the fees that are needed for itself. In fact multiple transactions can be batched together and have their fees covered by a single matching transaction. 

Figure. Alice sends a quantity of 9 tokens of type X to Bob with the assistance of Stacy, an SPO, who covers Alice's transaction liability and receives tokens of type X in exchange. The implied exchange rate between X and Ada is 3:1. 

New measures of value

The above process is entirely opt-in for SPOs. Each one can determine their own policy and exchange rate, as well as decide to change the exchange rate for the various tokens they accept on the spot. Moreover, there is no need for agreement between SPOs about the value of a specific token. In fact, different SPOs may provide different exchange rates for the same token, and a user issuing a liability transaction can offer an amount of tokens corresponding to the minimum, average, or even maximum of the posted exchange rates in the network. In this way, a natural trade-off arises between the settlement time of liability transactions and the market value of the tokens they offer.
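The trade-off can be sketched as follows: offering at the minimum posted rate spends the fewest tokens but only the SPO posting that rate will profitably match, while offering at the maximum spends the most tokens but any SPO will match. All names and rates below are hypothetical:

```python
def offer_for_strategy(fee_lovelace, posted_rates, strategy):
    """Token units to offer, given SPO-posted token-per-ada rates
    and a pricing strategy ('min', 'avg', or 'max')."""
    pick = {"min": min, "max": max,
            "avg": lambda rs: sum(rs) / len(rs)}[strategy]
    return fee_lovelace * pick(posted_rates)

rates = [2, 3, 5]          # token units per lovelace, one rate per SPO
fee = 160_000              # a ₳0.16 liability in lovelace

offer_for_strategy(fee, rates, "min")  # 320_000: only the 2:1 SPO matches
offer_for_strategy(fee, rates, "max")  # 800_000: every SPO would match
```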

This illustrates how native assets, the EUTXO model, and the simple but powerful tweak of introducing liabilities in the form of negative values in token bundles can accommodate Babel fees, empowering users to price transactions in any token supported natively by the system. It also shows the unique advantage of being an SPO in such a system. It should be noted that SPOs need not be the only entities in the network offering to cover liabilities. In fact, an SPO can readily partner – if they wish – with an external liquidity provider who will issue the matching transactions. In addition, third-party providers can also act on the network independently and issue matching transactions. Nevertheless, the benefit will remain with the block producers; SPOs can always front-run matching transactions and substitute their own if they so wish. This is a case where front-running is a feature: it makes it feasible for SPOs to be paid in the tokens they prefer for their transaction-processing services.

The mechanism of negative quantities in token bundles can be implemented in the basic ledger rules of Cardano at some point following the introduction of native assets with the Mary Hard Fork. Beyond Babel fees, the mechanism allows a variety of other interesting applications, such as atomic swaps for spot trades, that we will cover in a future blog post. It is yet another illustration of the power of Cardano's approach and its ability to support a diverse and entrepreneurial community of users and stake pool operators. 

I am grateful to Manuel Chakravarty, Michael Peyton Jones, Nikos Karagiannidis, Chad Nester and Polina Vinogradova for helpful discussions, suggestions and comments related to the concept of Babel fees and its implementation in the Cardano ledger. We also have a video whiteboard walkthrough covering this topic.

Building native tokens on Cardano for pleasure and profit

New capabilities will allow users to choose simple and powerful tools to bring their assets to life on Cardano

18 February 2021 Tim Harrison 9 mins read


With the ‘Mary’ protocol upgrade, which will be implemented using our hard fork combinator technology, native tokens and multi-asset capability are coming to Cardano.

On February 3, we upgraded the Cardano public testnet to ‘Mary’ for final testing. We plan to deploy the Cardano update proposal to mainnet on February 24, ahead of the epoch 250 boundary, so that it takes effect on March 1. If we need a few more days of testing, we'll deploy ‘Mary’ the following epoch instead, after the five-day period required for updates to take effect. ‘Mary’ has been successfully running on our testing environments for several weeks, so our confidence level remains high. As always, however, we’ll follow a strict process (developed and honed over the previous Shelley and Allegra HFC events) to get this right.

Once the code is successfully deployed to mainnet, we’ll release a new Daedalus Flight version for user testing, which will be our first Cardano wallet with integrated multi-asset capability. Once we are happy with wallet performance and usability, we’ll deliver the Daedalus mainnet release bringing the full-fat native token experience to every Cardano user.

Why native tokens?

Native tokens will bring multi-asset support to Cardano, allowing users to create uniquely defined (custom) tokens and carry out transactions with them directly on the Cardano blockchain.

The use of tokens for financial operations is becoming ever more popular. It can cut costs at the same time as improving transparency, enhancing liquidity, and, of course, being independent of centralized entities such as big banks. Tokenization is the process of representing real assets (eg, fiat currencies, stocks, precious metals, and property) in a digital form, which can be used to create financial instruments for commercial activities.

Cardano will provide many tokenization options. With the ‘Mary’ upgrade, the ledger’s accounting infrastructure will process not only ada transactions but also transactions that simultaneously carry several asset types. Native support grants distinct advantages for developers as there is no need to create smart contracts to handle custom token creation or transactions. This means that the accounting ledger will track the ownership and transfer of assets instead, removing extra complexity and potential for manual errors, while ensuring significant cost efficiency.

Future and utility

Developers, businesses, and applications can create general purpose (fungible) or specialized (non-fungible) tokens to achieve commercial or business objectives. These might include the creation of custom payment tokens or rewards for decentralized applications; stablecoins pegged to other currencies; or unique assets that represent intellectual property. All these assets can then be traded, exchanged, or used as payment for products or services.

Unlike ERC-20 tokens, which are based on Ethereum smart contracts, the tracking and accounting of custom tokens on Cardano are supported by the ledger natively. Because native tokens do not require smart contracts to transfer their value, users will be able to send, receive, and burn their tokens without paying the transaction fees required for a smart contract or adding event-handling logic to track transactions.

Working with native tokens on Cardano

In creating an environment for native tokens, we have focused on simplicity of working, affordability, and, of course, security.

Depending on their preferences and technical expertise, users will be able to choose from three ways to create, distribute, exchange and store tokens:

  • Cardano command-line interface (CLI). Advanced users can currently access the CLI via a dedicated testing environment. We will deploy the CLI on the mainnet when we hard fork.
  • A ‘token builder’ graphical user interface (GUI). This will follow the native token CLI launch, providing an easier way for creating tokens.
  • The Daedalus wallet. Daedalus will provide support for sending and receiving custom-created tokens. Daedalus Flight will test native token functionality in March, which will be shortly followed by the mainnet release.

Let’s dig down a little into each option.

Working with Cardano CLI

Advanced developers can use the native tokens testing environment to create (mint) assets and send test transactions to different addresses.

Working with the CLI assumes familiarity with setting up and operating a Cardano node, and experience in working with transactions and managing addresses and values. To create native tokens using the Cardano CLI, one would need to:

  • Set up and start the Cardano node
  • Configure a relay node to connect to the native tokens testing environment
  • Start interaction with the network (prompt Cardano CLI)
  • Construct a monetary policy script
  • Create tokens using the monetary policy script
  • Finally, submit and sign transactions to transfer tokens between addresses.
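The monetary policy script mentioned in these steps is a small structured document describing who may mint and under what conditions. As a hedged sketch, a simple policy of this kind might combine a signature requirement with a minting deadline; the key hash and slot number below are placeholders, and the exact schema should be checked against the Cardano documentation:

```python
import json

# Sketch of a simple monetary policy: minting requires a signature
# from a given key AND must happen before a deadline slot.
# "deadbeef..." and the slot number are placeholders, not real values.
policy = {
    "type": "all",                                  # all conditions must hold
    "scripts": [
        {"type": "sig", "keyHash": "deadbeef..."},  # placeholder key hash
        {"type": "before", "slot": 42_000_000},     # placeholder deadline
    ],
}

print(json.dumps(policy, indent=2))
```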

Native token tutorials and exercises are available on our developer site to help developers mint tokens, create monetary policies, and learn how to execute multi-asset transactions.

We are already seeing particular interest from stake pool operators for this. So far, hundreds of test tokens have been created, and we continue to improve the CLI based on feedback. We welcome your comments and encourage community testing.

Token builder: a user-friendly GUI for token creation

The CLI requires a certain level of development prowess. So we have devised other ways for less technically proficient users to create tokens. To achieve this, we plan to launch a token builder after the mainnet CLI launch.

The token builder is a graphical user interface that makes token creation easier. If you’re interested in creating tokens for your decentralized application, wish to tokenize your property, create NFT collector cards represented as specialized assets, or want to create a stablecoin pegged to the value of other currencies, the token builder can help with that.

To create a token you would just need to fill in:

  • The token name (eg, Hello World)
  • The token symbol (eg, HEW)
  • The token icon (generated automatically)
  • Amount to create (eg, 1,000)
  • Cardano wallet address (your address to host newly created tokens).
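The form fields above map naturally onto a small data structure; a hypothetical sketch of what the token builder might collect (the class, field names, and address are illustrative, not the actual implementation):

```python
from dataclasses import dataclass

@dataclass
class TokenSpec:
    """Fields a token-builder form might collect (illustrative only)."""
    name: str            # e.g. "Hello World"
    symbol: str          # e.g. "HEW"
    amount: int          # initial supply to mint
    wallet_address: str  # destination for the newly minted tokens

spec = TokenSpec(name="Hello World", symbol="HEW",
                 amount=1_000, wallet_address="addr1...")  # placeholder addr
```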

The token builder generates the monetary policy automatically – you won’t need to define it yourself. This streamlines token creation and simplifies it for non-technical users.


Figure 1. The prototype token builder dashboard

Initially, the token builder will support only fungible token creation (non-fungible tokens can be created using the Cardano CLI). In time, we’ll extend the functionality to allow creating non-fungible tokens and changing the monetary policy according to specific preferences. This means that users will be able to specify the conditions under which tokens are minted (or burned), or who has control over the asset supply, for example.

Finally, when tokens are minted, it will be possible to mint more by clicking the ‘Mint more’ button. This can be done based on the same policy to create more tokens of the same kind, or you can create other tokens that represent different values based on a different policy. For example, you can create more Hello World tokens, or, starting from scratch, you can create 500 ‘test’ tokens that will be used for other purposes (these will have a different minting policy).

The token builder aims to reduce the complexity of token creation and also focuses on the enhancement and visual presentation of functional processes. As an outcome, we aim to provide visibility around all the tokens created, their values, quantity, and addresses between which they are being transferred – all in one place.

Daedalus

Those users who do not wish to create their own tokens but who want to use existing ones for payments, purchases or exchange, will be able to use such wallets as Daedalus, and later Yoroi.

The Daedalus team continues to work on integrating the wallet backend with the user interface to support native token functionality. Users will then be able to hold native tokens in their wallets, and send and receive them as they would ada.

Native tokens are uniquely identified by two hexadecimal numbers stored on-chain ‒ the Policy ID and the Asset Name. Considering that these numbers are not 'human-friendly', we have created fingerprints for easier identification of native tokens by users. Fingerprints are 44-character alphanumeric strings beginning with the prefix 'token'.

Additional token data displayed in the wallet UI (name, description, and acronym) will be provided by the Cardano token registry, administered initially by the Cardano Foundation.


Figure 2. Daedalus native tokens UI

Native token lifecycle

When all the necessary components are deployed, the native token lifecycle will be complete. It consists of five phases:

  • minting
  • issuing
  • using
  • redeeming
  • burning.


Figure 3. Native token lifecycle phases

During these phases, asset controllers will be able to define the policy for the asset class and authorize token issuers to mint or burn tokens. Token issuers can then mint tokens (for applications, for instance), maintain their circulation, and issue them to token holders. Finally, token holders (eg, individual users or exchanges) will be able to send tokens to others, use them for payment, or redeem them when they have finished using them.

What’s next?

We launched the testing environment in December 2020, laying the foundation for native token development. We also added a staging environment to enable initial testing by exchanges and stake pool operators. It features a faucet and allows a network of nodes to be built while connecting to the relays.

Follow our Cardano status updates to see our weekly progress. Alongside the core development work, our teams are working on all the supporting documentation and updating it on the developers website. As we expand the capabilities of the native tokens, and add tools and interfaces, we’ll be providing documentation and tutorials to encourage people to get involved. Naturally, the codebase is open source and we have already seen a number of interesting community projects emerge (around digital collectibles, for example).

So a lot will be happening in late February and early March, from final testing and the HFC event, to native tokens on Cardano within a brand new Daedalus wallet experience. Exciting times ahead!

Find out more by joining other community members to discuss native tokens in the Cardano Forum's dedicated native token section. And don't forget to sign up for our devnets program.

Additional technical input by Olga Hryniuk.

Our million-dollar baby: Project Catalyst

The next Catalyst funding round will be our most accessible and ambitious round of funding yet

12 February 2021 Dor Garbash 3 mins read


We launched Project Catalyst six months ago as a series of experiments to advance on-chain governance and accelerate community-driven innovation on Cardano. The project seeks to achieve the highest levels of community collaboration and to seed the best ideas with development funding via a community-moderated process. Community, innovation, funding, value, growth – Catalyst creates powerful synergies, and ultimately a self-sustaining engine of growth for Cardano’s future. 

Each funding round has grown in its scope, level of funding, and community engagement. We already have 7,000 members on the IdeaScale innovation platform with 1,800 active voters. Adoption is growing by 10% every week and we have only just begun. 

Fund4 will be our most accessible and ambitious round yet and our first million dollar round – that’s the size of the ada pot to fund development projects on Cardano. Proposal teams will use these funds to develop tooling, build decentralized applications, launch education and training initiatives for developers, and so much more. Every fresh contribution adds fresh value to the ecosystem. And since the community is at the core of Catalyst, 20% of treasury funds are set aside to reward and incentivize community advisers, referrers and participating voters for their contribution.

Throughout 2021, we will continue to encourage engagement with the project across the Cardano community by making it more accessible. In Fund3, voter registration has been significantly improved. Registration is now fully integrated with the Daedalus wallet, within a new registration center. This replaces the separate, user-unfriendly, and time-consuming process we had to use in Fund2 for technical reasons, now addressed. For Yoroi light wallet users, a browser extension provides easy registration. Voters will then use a dedicated mobile voting app – downloadable on iOS or Android – to complete the process. In a future Daedalus release, users will ultimately be able to register and vote from the wallet. To participate in voting, you need to meet a threshold currently set at 3,000 ada – a level chosen to help protect the voting system from malicious attacks. To get a Cardano wallet, make sure to download Daedalus only from its official site or use the official Yoroi browser extension.

In less than half a year, Project Catalyst has grown to become the world’s largest decentralized autonomous organization (DAO). It is a fulcrum of future development and sustainable innovation, driven by the Cardano community, for the Cardano community. This latest fund is a huge step up for the proposers, advisers, and voters collaborating already. We want to encourage everyone to become part of bringing on-chain governance to Cardano.

If you are an ada holder and you want to influence and contribute to the future direction for Cardano, then bring your ideas and join us at Project Catalyst.

*Please note, due to an editing error, a previous version of this blog erroneously stated that voter registration and voting would be included in the forthcoming Daedalus release. Our apologies for any confusion.

Decentralizing social media: a conversation with Ben Goertzel and Charles Hoskinson

The minds behind SingularityNET and Cardano come together to explore a vision of the future of decentralization, AI, and social media.

5 February 2021 Eric Czuleger 57 mins read

At the end of 2020, we announced our collaboration with SingularityNET, in an exclusive fireside chat between Charles Hoskinson and SingularityNET founder & CEO, Ben Goertzel. 

SingularityNET recently shared further information on the partnership when they announced their Phase Two initiative, which includes a shift from Ethereum to Cardano to achieve new network functionalities, and the launch of a stream of new AGI-ADA tokens.

Last week, Charles and Ben sat down together again in a special SingularityNET podcast. In a wide-ranging discussion, the pair explore decentralized social media, the Cardano collaboration, and how a benevolent general AI might foster healthier social discourse.

Here, in this exclusive long read, we have transcribed the whole conversation for you to enjoy and savour.

Ben Goertzel: Alright. Pleasure to be chatting once more, Charles. I thought it would be amazing to have an on-air discussion on a topic that's been on so many people's minds recently, which is the perhaps critical importance of decentralization for social media and social networks, because this is something we've both been thinking about deeply for quite a long time, and have both been moving toward action on for quite a long time in our own ways – me with the AI spin and you with Cardano and blockchain. But now things seem to be coming to a head, and the world seems suddenly concerned that a few large corporations are programming everyone's brains in bizarre ways. So, yeah, maybe it is cool to start out just by hearing your overview of the topic.

Charles Hoskinson: Yeah, it's an interesting situation. So I'm kind of conflicted. I'm a big libertarian, and the libertarian guys say, "Hey, let the market decide." So when someone gets de-platformed, we say, "Hey, it's a private company. They can do whatever they want." But the issue is collusion, and so the watershed moment for me wasn't the de-platforming of Trump. I said, yeah, okay, the guy violated the end user license agreement probably 9 million times. At some point you have to throw the guy out. The issue was the de-platforming of Parler, because that was a very different animal.

So the whole argument was, well, if you don't like Twitter, go compete with it, build your own social network. That's exactly what Parler did. And they had different moderation standards. But then what occurred was that all of Silicon Valley got together and colluded, and they basically jointly decided to completely de-platform Parler. So Amazon took them down, Apple took them down, Google took them down. And if you're put in a market position where 100% of the mobile market and most of the web market is basically blacklisting you, and you have no way to be on a cell phone for an average consumer, no way to have a website for an average consumer, without going to extraordinary lengths – it's almost like the Pirate Bay; you have to host servers in Afghanistan or something to escape it. That's very problematic. It feels like Standard Oil controlling the shipping prices of oil back in the 19th century.

BG: The appeal to ethics seems so disingenuous, right? You can search QAnon garbage on Google just fine. So why is it so unethical for there to be QAnon garbage on Parler as some of the content, right?

The idea that these big tech companies are acting out of a moral necessity to save everyone's lives rings very hollow, right? No doubt some people in those companies really are thinking that way. But the alignment of these marginal ethical arguments with obvious corporate profit interests, advanced by explicit collusion among these big players, makes it hard to take the ethical aspect one hundred percent seriously.

CH: It's almost become an ethical tautology in a certain respect. They say, 'Don't be evil, except for the times you have to be.' It's a crazy, crazy statement, where these companies say, well, we're trying to be moral. And I say, 'Okay, but no one elected you. And why are you guys in charge of the totality and curation of the flow of all information?' I very firmly believe what needs to happen is we need to split the protocols that carry the information from the interfaces that curate that information. That feels like a much more natural thing. The problem we have right now is that the stack is completely vertically controlled by a single company.

So, Google doesn't just curate what you see in the search engine. They also control the underlying engine. And as a consequence, they can make a decision on pretty much anything and exclude people laterally. It's the same for the app stores; it's the same for social networks. The level of collusion is very problematic. You can't tell me that they didn't talk to each other if they all de-platformed someone on the same day, in the same hour. It'd be one thing if it was a gradual process, maybe Google first, and two weeks later Amazon, something like that. But if it's all at exactly the same time, then it means they picked up the phone and called each other and said, well, we've just decided that this is no good for you.

The problem is that decentralization doesn't solve the underlying problem that they're complaining about, which is radicalization. The issue is that the way information is being presented, it's manipulating our cognitive biases. We're creatures of cognitive biases. No matter how smart we are, we have availability bias, selection bias, and confirmation bias. There are hundreds of them, and social scientists, psychologists, and neuroscientists think about these things and quantify them. And if you digitize those biases and build algorithms to exploit them, then what ends up happening is you create echo chambers. You create these silos, and the people in each of those silos are incapable of getting out of them. There's no idea flow between them. So if you decentralize that without solving the underlying problem, all you do is make the silos more resilient.

BG: There's a phenomenon when you're applying AIs to learn to win in games or video games, which is both a problem and a benefit: the AI will learn to do what you asked it to do. So if you're asking it to get maximum points in a game, and there's a way to do that by hacking around the rules of the game in some weird way no human would ever think of, the AI will explore various options and, if it's working well, will find some route to achieve the objective function without taking into account whatever implicit constraints you had about what's the artful way to do it.

I think something similar exists with social media companies. They have certain metrics and objectives they're working toward, often very systematically, internally, right? They want people to be looking at their site as long as possible, for example, or they want them to be spending as much as possible clicking on ads. And they'll put a lot of human and algorithmic effort into optimizing toward that goal. And then we can't be very surprised when these groups of brilliant people making cool software build systems that optimize toward that goal via whatever hacks they can find. And those hacks include exploiting human cognitive biases, exploiting dynamics of addiction in the human brain, and all sorts of human emotional patterns, exploiting human angst and desperation and existential confusion. The algorithms and the corporate systems will exploit whatever they can to achieve the goals they're given.

And as you say, it's organized so that these corporate organisms, which are now hybrid human-and-digital computing-process organisms, are almost like a parasite on modern society, and they're achieving their own goals pretty effectively. Take a bird's-eye view of human society, of where we want to be and where we want to go during the next few years, maybe leading toward the singularity and the creation of AGI and all that, and then consider a situation where these corporate human/computer networks orient toward maximizing shareholder value by getting you to buy stuff online and stare at their website as long as possible.

I mean, these sorts of organizations having that much power is not the optimal dynamic for shaping the collective mind, heart, and soul of humanity, right? It's pretty far off from where we want to be. And the extremism and siloing and tribalism we're seeing online and in real life, I think that's probably only scratching the surface of the screwed-up patterns being fostered. That's the surface layer, where it's easy to see how screwed up it is. And there are so many other screwed-up individual and collective dynamics happening. I wouldn't say all caused by this organization of social media and the tech industry, but certainly co-evolving with it and codependent on it.

CH: Well, it's an interesting thing. I tend to agree with Max Tegmark in this respect: you invent the car first and then you invent the safety belt. With new technology or new processes, there's a lack of wisdom about the safety components until after you've suffered the consequences. So, look at the oil and gas industry in the 19th century. They started drilling all these wells, and only after they started doing that did we start thinking about environmentalism. We said, well, maybe it's not such a good idea just to have unrestricted oil-well drilling. Maybe we need to think carefully about what this is actually doing to the environment.

Well, the oil of the 21st century is really the attention economy and the data economy. We have all this surveillance capitalism, and we have all these early pioneering firms effectively mining it. And, to extend the analogy, they're creating social-environmental damage in the process, because these algorithms and these platforms were built in a way that exacerbates human nature. So to your point, they didn't cause it, but I'd certainly say that they're exacerbating it and-

BG: I always think of everything in human society from the endgame of legacy humanity. We're working on creating AGI. If we can create a benevolent AGI, I mean, this is going to make our current problems seem so archaic and silly. Of course, things won't be perfect. There will be new problems we can't imagine. But this is certainly the biggest threshold event in the history of humanity, perhaps of life on Earth. We could be a few decades from that, or even less. If there are even decent odds that this singularitarian view is true, then how the collective mind of humanity is shaped is insanely important, right?

Because the first AGI probably isn't going to be just a mind in a box, totally separate from human society. The way things are going, it's more likely to come out of the interaction of multiple different AI components made by multiple parties, serving useful economic functions in the world at large. If the first AGI, the one that triggers this singularity, is coming out of the whole mess of the tech ecosystem and people using the technology to do useful things, then how messy that mess is, is an extremely important thing. And right now it does not look like the internet-AI tech ecosystem is evolving in a great configuration for spawning a benevolent super-AGI 5, 10, or 20 years from now, right? So some redirection, maybe effected by some of the sub-networks in there, like the ones we're involved with, would be highly beneficial.

CH: Well, the problem with AGI is that that's kind of like the Deus ex Machina situation where you're saying, well, we could solve this problem if we have this insanely powerful tool. And it's like, well, yeah, but maybe we don't actually need a tool that powerful to make meaningful progress towards this problem.

BG: For decentralized social networks, you don't need AGI. Absolutely not. You can do a lot with blockchain networks.

CH: Hang on. So I think an AI solution does provide a lot of value, but I look at it more like a cognitive crutch. If you injure your leg, you get on crutches or you walk with a cane or something like that. I recently had a gout attack, and for two weeks I was on a cane. So it's kind of funny: we think about this for the physical stuff, but for the mental stuff, we don't really think we need it. We say, oh, our brains are perfectly well functioning. No, we're dopamine addicts. We're constantly manipulated by digital devices, and we're in a situation where we're not acting rationally or objectively most of the time.

BG: And without access to our own hardware and software, we can't fix the bugs in a direct way.

CH: So the question is, what would be the simplest possible intelligent agent that could be constructed to act as a cognitive crutch, to alert us when we are being manipulated or when our behavior is exhibiting patterns that have been propagandized? That feels like it would be a massive step forward.

BG: Now we're getting to it. This is some of the stuff that I'm hoping we'll be able to build together with SingularityNET on the Cardano network over the next few years. I mean, if you look at intelligent virtual assistants now, like an Alexa or Google Assistant, A: these things are very stupid in many senses, right? I have a Google Home Max I use to play music in my house, and the system still hasn't realized I never listen to music with vocals during the day. It doesn't have that metadata there. It hasn't recognized that very simple pattern, so it repeatedly throws stuff at me that I won't listen to. It's not even able to understand extremely simple repeated patterns in human behavior, which would help them make more money, even by showing me more stuff I want to listen to, right?

So these systems are optimized very narrowly to serve certain functions, and their functions certainly are not to help us navigate the universe of the internet and media in a way that's optimal for our own growth and self-understanding, achieving our own goals and optimizing the collective intelligence of humanity. Very far from it. So one could envision a personal assistant that had a bit more general intelligence, so it understood at least a little bit of what we actually want and are doing, but also was not controlled by a mega-corporation with the primary goal of making it money, but was controlled by us, the ones being assisted by it, right?

I mean, I don't want the human personal assistant working for me, helping me do things whose main goal is to make some other corporation money, right? I want the human personal assistant working for me whose goal is to help me because I hired them to help me, right?

And we should have digital assistants like that, and they're going to be building machine learning models of everything we're doing, like a human assistant builds their own biological model of what their employer is doing. And they should be better than the human assistant: we should be able to explicitly inspect what that model is, and edit and correct it if we don't like it, and delete it if we want to, right? So, among other things, we need intelligent virtual agents to help guide our navigation of the whole internet information sphere, agents which are secure and decentralized and explainable to us. The thing is, we can do that without AGI. We can do that with technology we have right now, and this technology can help along the path toward AGI.

CH: Where do we get the training data from? That was the one thing I was thinking about is how do I train an agent like that?

BG: I mean, it's going into the smartphones that we use all day, right? So the training data that Google and Amazon and so forth are using, where does it come from? It comes from all of us. In principle, you can download most of what Google is basing its training data about you on, but very few of us are doing it. We're not using it, right? So clearly you need all the data from your interactions with devices and with people all day. You need that data to be in a secure data world that's owned and controlled by you, where you're confident it's being managed securely.

CH: Yeah, but we've got to get a little deeper. It's not just interaction data. You'd have to clearly show an example of confirmation bias to an extent that an ML model would be able to understand it. And how do you do that in an unsupervised way?

BG: We show it all the time, right? And if the AI has a view of a lot of people, even those of us who are especially clever in some ways, then in our basic human social and emotional interactions there's a lot that we do which is the same.

There's a lot that we do which is the same as what a lot of other people are doing, right? Like how you interact with an employee versus a romantic partner or a friend or someone who's arguing with you. The sort of dialogue meta-games and inner dialogue meta-games that people are playing are within the scope of current advanced neural AI tech to recognize; it's just that that's not what's being focused on. What's being focused on is recognizing subtle patterns in who's going to click on what ad. And you don't need that deeper understanding to predict who's going to click on what ad in the most concise and effective way. I mean, you don't care, right?

It's in principle a solvable problem that the tech industry is just not currently trying very hard to solve. But yeah, you're right. You focus on the AI part and I focus on the blockchain part. But in reality you need them both, because I guess each of us thinks the other guy's part is harder, since we understand how to solve our own part. You need the secure, scalable data ecosystem respecting data sovereignty, and you need that to fuel intelligent virtual assistants whose prime directive is really serving the person they're assisting. Plus this massive-scale data analytics that really understands what's going on with each of us in a way that lets it genuinely help us.

Because what is it to give a person what they want? Does it mean gratifying their most intense short-term impulse at each moment? Or does it mean giving them what they want in a balance along multiple timescales and at multiple levels of our being, which is what we try to do with our family and our human friends? And AIs are laughably far from even making an effort to give us what we want in that more profound sense at the moment.

CH: Right. Well, the reason why I was focusing on the AI part is that the biggest part, the blockchain part, the incentives engineering, relies very heavily upon the users and the agents inside the system. And so we say, "Okay, how do we incentivize people to supervise and curate data and agents in a way that we get more dialogue and great moderation?" The ideal form would be: if you take cliques that are disjoint and you put them in the system, then idea flow starts occurring between them, and over time they'll converge into a kind of great moderated middle.

So you can take a very extremist person and either the system acts like it has an immune system and it kind of kicks them out or that node over time, moderates. The incentives in the system have to be designed that way. The reason we have so many problems in my view with Facebook and Twitter is that it actually has the opposite incentive. You get more clicks and more interaction with the more polarized people become. So the system is built in a way to polarize people as much as possible and thus divide them as much as possible. Because it's actually boosting revenue.

BG: I think that's an easier problem to solve. Righteous indignation, the glorious feeling of being approved by others in your ingroup while being jointly indignant at the guys in the next group, is a really easy emotion to manipulate in people. It's sort of a low-hanging fruit. And to an extent these networks implicitly got stuck in manipulating this low-hanging fruit because it was the easiest way to keep people staring at their app, just as the internet settled early on porn, because showing naked bodies is a really low-algorithmic-complexity way to keep people staring at something. So, if something would give greater benefit, and even get people to stare at the site longer in the long run, but isn't quite as simple a problem, it gets bypassed in the loop of trying to incrementally achieve these metrics more and more each month.

What's interesting is the thought that rearranging the configuration of the tech stack, as you suggested at the beginning of the conversation, so that the protocols are separate from the applications, and the AI models and the tools used to create and inspect those models are also separate from the applications. Reorganizing things in this way opens up the dynamics of the whole ecosystem in a way that I believe has decent odds of leading to the evolution of social media tools that give people what they want in a more profound sense, and in doing so, create communication networks among people that are not focused entirely on immediate gratification of the ego and stoking of inter-tribal rivalries and so forth.

Because all these good and beautiful things we're alluding to exist on the internet right now. There's love, there's compassion, there's true connection between people with rival political views or from different historical tribes and so forth. It's not that we're not capable of that or that it isn't there. People are capable of amazingly deep connections with other people, of incredible self-awareness and uplifting of their own consciousness. You just need networks that foster this rather than squashing it and channeling you into tribalism and immediate ego gratification. And of course, neither you nor I nor our teams are going to build all the systems that solve this problem. So we've got to create the ecosystem and toolset in which the solutions are going to emerge.

CH: Right. Well, that's the point of incentives engineering: it's the initial push. And because you don't have friction to slow you down, you tend to accelerate and eventually you get to a great place. I mean, Bitcoin obviously got its incentives engineering right, and it went from a single miner to warehouses of miners all around the world, and now this colossal system. We can argue about the power consumption, but that model was competitive to the point that it created a trillion-dollar ecosystem. So I often think, "Well, what incentives do we need?" And we have three distinct sets of things we need to accomplish at the same time if the network is going to be sustainable and useful to society.

So one thing is that you would like information to be curated in a way that clearly separates objective reality from the subjective analysis of it, gives people a diverse set of viewpoints, and acknowledges that stuff is nuanced. If people get that, then you get rid of the fake news. You also get some consensus in the network on baseline facts, because right now we live in a reality where people can't even agree on basic things. Some people think coronavirus is a hoax, some people think vaccines are poisoned, et cetera. So there are just disjoint realities that people are in. It used to be we would have one set of facts, we'd agree on those, and then our interpretation of what those facts mean-

BG: It's true. A lot of people really believed Donald Trump had the most people at his inauguration ever, and the New Yorker doctored those images. And of course, sometimes the mainstream media may have distorted something about Trump, but the thing is, that's like an image, right? And people didn't believe the photograph, they believed the photograph was fake. And when you're at that point where people don't believe the photographs, then it's very hard. Then you have to be on the ground there, observing it in a sort of very clear state of mind to believe anything.

So, I mean, I'm not even a realist or materialist fundamentally. I don't know if there is an objective reality. But what people are doing is they're not thinking in a clear and coordinated way about this belief they have or this thing they'd been told. What evidence is it grounded in? What's the process of grounding the abstraction or the claim in the evidence? That process is broken. And it's partly because of AI and advanced informatics tools, because you can make a deep fake. It's actually hard to tell whether this video is Goertzel and Hoskinson, or a deep fake of Goertzel and Hoskinson put up by someone to troll all of us. It's no longer immediate; seeing isn't believing. To tell, you have to think.

CH: Oh yeah. The Collider George Lucas deep fakes are extraordinary, and that's last-generation technology. Where they're going in a few years is going to be socially very damaging, because you'll have these perfect simulacra of major figures, and they'll be saying and doing terrible things. So that's the first part, the curation. Go ahead?

BG: You need the social network to tell bullshit from reality. So if the social network is broken, then you can't tell, because you can't tell by looking; you can only tell by what you read and what others are saying, right?

CH: Right. And I think that's why they're proactively de-platforming people and controlling flow of information because there's a political terror about the consequences of deep fakes and what they're going to do to dialogue.

BG: Yeah, the point they're going to come to.

CH: Yeah. Put a pin in that, because there are two more points. So, as I mentioned, the first is just the curation of the information itself: putting it in a way that promotes, instead of siloing, idea flow and idea quality, separating objective reality from subjective analysis, and then, when you're looking at the subjective, giving you a spectrum of viewpoints, almost like a next-generation Nolan chart to show you different viewpoints.

Okay, so then second there's clearly a data economy that exists. And surveillance capitalism is not just a nice term. It's a multi-billion perhaps trillion dollar economy. It's very valuable to society in certain respects. It allows you to micro target people. It allows you to have more friction-free commerce. You get the right products to the right people. So there's a huge advertising model and that shouldn't go away, but it should respect the privacy of the individual.

So there's been a lot of attempts to explore better ad models, with Brave and BAT, for example. I think whatever social network you create, you have to move in that direction, where people are able to monetize their data, preserve their privacy, and actually get a share of the profit from the interactions that they have. And then the third design goal comes from the infrastructure itself being horrendously expensive to maintain. I mean, you're talking about petabytes of data. All these systems have N-squared interactions, and so as your social network gets to a billion people, that quadratic complexity becomes very difficult to curate and manage. The computational cost of that infrastructure is the reason why Google is so big and Amazon is so big and Facebook is so big.
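As an editorial aside, the quadratic (N-squared) growth in interactions mentioned above can be sketched in a few lines; the user counts are purely illustrative, not measurements from any real network:

```python
# Pairwise interactions in a fully connected network of n users grow as
# n * (n - 1) / 2, i.e. O(n^2). Illustrative numbers only.
def pairwise_interactions(n_users: int) -> int:
    """Number of distinct user pairs that could interact."""
    return n_users * (n_users - 1) // 2

for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} users -> {pairwise_interactions(n):,} potential pairs")
```

Going from a thousand users to a billion multiplies the potential pairs by roughly a trillion, which is why curation and storage costs explode at social-network scale.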

So you somehow have to figure out how you subsidize a decentralized distributed system to curate and store all of that information. And you actually have to make data and users an economic actor, or they get pruned out if they don't contribute enough to the system. And we haven't quite figured out how to do that in a much simpler sense with just smart contracts and these big systems.

I mean, you see things like IPFS and Golem and other attempts to distribute networking, data, and storage, but those protocols are imperfect. And when you talk about a social network, you're talking about people posting videos every day, 4K videos; people posting pictures every day, sometimes hundreds of them; millions of meaningful interactions. Even a small clique, say an extended family, is going to generate over a month's time probably a million-plus interactions of various things, from likes and thumbs-up on. And then you're adding these intelligent agents that also have to do an enormous amount of processing on a regular basis, and those agents are only going to get more sophisticated and be interacted with a lot. So you have to have a lot of that be handled by the edges, the end user.

BG: Yeah, yeah, yeah, absolutely. And that's hard. I mean, we've been working on that with a project called NuNet, which is spun off of SingularityNET. And I think we understand a lot about the architecture that has to be there, and about how to split up machine learning algorithms for this sort of hardware infrastructure. But there's a lot of work to be done there. There's a lot of avenues for inter-operation of NuNet, SingularityNET, and Cardano there. But I mean, it's hard. It's hard computer science and software engineering. And on the other hand, obviously Google and Amazon and Microsoft have solved a lot of really hard large-scale software engineering problems, different ones. But I think with a fraction of the effort that they put in, we can solve that problem.

CH: Yeah, they're cheating, because they always have a trusted third party, and that massively simplifies your protocol. Their problem is easier. This is a harder problem. But on the other hand, computer science and hardware have both advanced a lot since they started doing what they're doing.

BG: Yeah, and the incentive engineering aspect, the incentive design aspect, is quite critical and quite fascinating, and it exists for end users and also within the developer community. Because what you see now is that the significant majority of AI PhDs go to work for these big tech companies, or start a startup which then gets acquired by one of these big tech companies. So the incentive structures of end users and of developers have been channeled around these large tech companies, which is an amazing achievement. I would be proud if I created one of them. On the other hand, it's not optimal; it's distorting the course of society.

And this is one thing that interests me in our own collaboration over the next few years. I've been working with my team at SingularityNET to architect a tentative five-year plan for how to roll out and grow SingularityNET on the Cardano platform. Part of this involves the AGI token, the new AGI-on-ada token that we're working to launch as a new version of the SingularityNET AGI token. Because we need the AGI token to be the right sort of incentive mechanism, largely on the backend, for AI algorithm developers and for AI application developers building applications on top of the AI. You need the incentivization there to work right in order to create the systems that will be creating the right incentive structures for end users.

BG: And I think things like the Catalyst program within Cardano are a very interesting step there, where Catalyst community members democratically vote, with some liquid-democracy mechanisms, on which Cardano projects should get some tokens. And I've been watching, and participating now and then, in the Catalyst discussions. And I want to do something that's a lot like that, with some added dimensions, for SingularityNET on Cardano, for fostering the community and expanding the community to build AI applications on our shared decentralized network. Because you need the right incentive structures on all these different levels, and they need to coordinate together, which is hard. But there I think tokenomics gives you an advantage over what big tech has, because it's more scriptable and more flexible than the money and stock options and incentive mechanisms they have.
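To illustrate the liquid-democracy mechanism Goertzel mentions, here is a minimal, hypothetical sketch; the names, data shapes, and logic are assumptions for illustration, not Catalyst's actual implementation. Each voter either votes directly or delegates to another voter, and a ballot follows the delegation chain:

```python
# Hypothetical liquid-democracy resolution: follow delegation chains to a
# direct vote, guarding against cycles. Not Catalyst's real implementation.
def resolve_vote(voter, delegations, direct_votes):
    seen = set()
    while voter not in direct_votes:
        if voter in seen or voter not in delegations:
            return None  # delegation cycle or abstention
        seen.add(voter)
        voter = delegations[voter]
    return direct_votes[voter]

delegations = {"alice": "bob", "bob": "carol"}   # alice -> bob -> carol
direct_votes = {"carol": "fund project X"}
print(resolve_vote("alice", delegations, direct_votes))  # fund project X
```

The cycle guard matters: without it, two voters delegating to each other would loop forever instead of counting as an abstention.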

CH: Well, what's so cool about Catalyst is that by the end of this year there's going to be at least probably $100 million worth of value available to the community. And the partnership with IdeaScale is just the beginning. We keep adding more and more firms to assist us with figuring out how to build a productive voting community, because it's not just about raw participation. I think about two to three percent of ada holders are right now in IdeaScale, because it's still kind of in a beta form; our goal is to get that to 50% before the end of the year. But then we're trying to identify what meaningful participation means.

Because I would argue the American election system is not meaningful at all. You just show up and vote, but whether you spent hours thinking carefully about it, or you just voted randomly, it doesn't really matter. And the system doesn't differentiate that. So you end up with very poor outcomes and rational ignorance and a race to the bottom, effectively. So, meaningful participation is something we're definitely very interested in. And our hypothesis is that's going to lead to significantly better funding outcomes. So our return on intention is quite good for the system.

It gives you this M & M thing. It feels so empty without M & M: maintenance and moonshots. Maintenance means that you can maintain the system as it is and iterate and refine and evolve, and moonshots means that you have enough money to go pursue high-risk, high-return research. Most great societies do this through some vehicle, whether it's the Horizon program in the European Union or DARPA in the United States, where they say, "All right, we're going to throw a bunch of money at some crazy stuff." And the odds are it's probably not all going to work out. In fact, we seldom get exactly what we want, but every now and then we get fiber-optic cables and satellites and the internet, we get self-driving cars, and we get CALO and these other cool things.

The value to any DApp that comes over to Cardano is that you get to reuse the Catalyst stack at some point, and then you can start entertaining, "Well, what does a treasury system look like within our ecosystem?" So, let's look three to five years out into the future, and let's say SingularityNET's gotten a lot of adoption; there's tons of transaction volume. You could put a slight tax on each transaction that goes into a treasury system for all the AGI holders. And then suddenly, you have a mini-Catalyst just for AGI, and you can follow your own M & M strategy. So one part can say, "Hey, we just want to add more agents and more capabilities," and the other part can say, "Let's go tackle a super-hard problem in the AI space." And it's really risky to go chase that problem. It may be the holy grail, AGI, or it could be a subset of that, or a compositional subset where you decompose the problem into a collection of subproblems and you're solving one of them. And if you fail, it's okay. And if you succeed, that solution lives in the open domain; it's not controlled by a company, it's controlled by a protocol, so it's ubiquitously accessible.
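The per-transaction treasury tax Hoskinson describes can be sketched roughly as follows. The 0.5% rate and the function names are assumptions for illustration, not actual Cardano or SingularityNET protocol parameters:

```python
# Hypothetical treasury levy: a small fraction of every transaction is
# diverted into a shared treasury that funds maintenance and moonshots.
TREASURY_RATE = 0.005  # assumed 0.5% levy, purely illustrative

def settle(amount: float, treasury: float) -> tuple[float, float]:
    """Return (amount the payee receives, updated treasury balance)."""
    levy = amount * TREASURY_RATE
    return amount - levy, treasury + levy

treasury = 0.0
for tx in (100.0, 250.0, 40.0):
    received, treasury = settle(tx, treasury)
print(f"treasury accrued: {treasury:.2f}")
```

The point of the design is that the treasury grows in proportion to ecosystem usage, so funding for "maintenance and moonshots" scales automatically with adoption rather than depending on a one-off grant.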

BG: So with what we're planning out now with a certain amount of AGI ADA tokens, I think we can do something Catalyst-based that can help get AI developers onto the SingularityNET-on-Cardano platform and can help build toward both applied narrow AI, in domains from social media to medicine to DeFi, as well as other components toward AGI. But there are also much bigger things. Like if you think about it, we're competing with these trillion-dollar companies, right? So I mean, eventually, we need custom hardware for decentralized AGI. If there is enough usage, as you say, a modest fee on usage can drive Catalyst-based funding of research. And I mean, you could fund the design and prototyping of decentralized AGI chips, right?

I mean, ultimately, we need to be seeding these exponential economic growth processes to the point where there's more wealth in the decentralized AI ecosystem than there is in the centralized AI ecosystem, which sounds very fanciful now. But I mean, I'm a lot older than you. I'm old enough to remember when the computer companies were Honeywell and the like, right? No one believed that PC companies were going to supplant them, let alone internet companies like online ad agencies. Right? But this is how things go. And I mean, in the same way, the potential for network effects and exponential growth based on the right incentive mechanisms on multiple layers... The potential is there for a decentralized AI ecosystem to grow much bigger than the current trillion-dollar companies. I mean, you just need to set the right growth processes in place. And I think, between our communities and codebases, we're able to see what those are right now, but of course, getting that seeding to work involves an endless number of difficult subproblems, both technological and human.

CH: Right. Well, that's the value of trade. Bob makes the spear, and Alice makes the rope. So one of the things we're trying to focus on in Cardano is abstracting the toolsets and capabilities of the protocol so that each DApp that comes can reuse that, and they don't have to be a domain expert.

BG: That's what got me to fall in love with Cardano in the first place. It's like, this is actually a reasonable software architecture, right? I mean, you're using functional programming. You're breaking things down into pieces. So if I want to take some AI algorithm and make it do homomorphic encryption or multi-party computation, so it runs in a secure and scalable way, I don't need to write all that code myself. There are actually tools within the blockchain infrastructure that are usable as code when you're on the AI level. I mean, Ethereum is super cool. Launching smart contracts into the world was a landmark thing, but I mean, the Ethereum codebase is not like that. There's nothing in there you're going to reference or use within your secure AI layer.

CH: Well, the computation model is just wrong. It's got a global state, and so you can't grow beyond a certain amount.

BG: It's supposed to be a world computer, but you cannot build a functional world computer that way.

CH: No. You have to go from global to local. And then you just have so many problems in that model. In fact, we just had a lecture this morning with Manuel Chakravarty talking about the differences between the extended UTXO model and the Ethereum-style accounts model. And we'll publish that video probably next week, but it just becomes so obviously self-evident that while it's a great proof of concept, the system... First, it can't scale. And second, every utility draws on the same resources as everything else. So whether you're using a voting system, or you're using a stablecoin or a DEX, it all comes from one pool of finite resources. So if one of those resources gets over-consumed by a CryptoKitties, it makes all the other resources in the system more expensive. And that's a bizarre and asinine model. If Catalyst, for example, runs as a sidechain of Cardano... So let's say we have tons of DApps bombarding that, using that for the voting systems for their DApp, that will have no impact at all on the main chain performance.
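CH's contrast between local and global state can be sketched in a few lines. This is a hypothetical toy in Python, not Cardano's actual ledger rules: in an account model every transaction mutates one shared map, so all applications contend for the same state, while in an EUTXO-style model a transaction names exactly the outputs it consumes and creates, so its validity is checked locally.

```python
# Account model: every transaction mutates one shared dictionary,
# so all applications draw from the same global state.
accounts = {"alice": 100, "bob": 0}

def account_transfer(frm, to, amount):
    if accounts[frm] < amount:
        raise ValueError("insufficient funds")
    accounts[frm] -= amount          # global mutation
    accounts[to] += amount

# EUTXO-style model: a transaction consumes named outputs and creates
# new ones, so validity depends only on the outputs it touches.
def utxo_transfer(utxos, spent_ids, new_outputs):
    spent = [u for u in utxos if u["id"] in spent_ids]
    if sum(u["value"] for u in spent) < sum(o["value"] for o in new_outputs):
        raise ValueError("outputs exceed inputs")
    remaining = [u for u in utxos if u["id"] not in spent_ids]
    return remaining + new_outputs   # no shared state touched

account_transfer("alice", "bob", 30)

ledger = [{"id": "u1", "owner": "alice", "value": 100}]
ledger = utxo_transfer(ledger, {"u1"},
                       [{"id": "u2", "owner": "bob", "value": 30},
                        {"id": "u3", "owner": "alice", "value": 70}])
```

In the account version, a surge of activity in one application contends with every other user of the shared map; in the UTXO version, transactions touching disjoint outputs are independent by construction.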

BG: A hundred US dollars in gas just to do a swap transaction.

CH: I know.

BG: And how can you obsolete Wall Street that way? I mean, it's going to be tough, right? But on the other hand, I think the foundational algorithms to get around those problems are there in Cardano. And then, in SingularityNET, we have foundational algorithms for distributing and decentralizing secure AI. So, I mean, I think the ingredients are there for what needs to be done. On the downside, none of us has the war chest that Google and Amazon and Apple and Microsoft do, so we have to work around that by being cleverer than them and designing the right incentive mechanisms so that you get positive feedback effects and network effects, and things can really grow. And I think that this year is going to be pivotal actually, but we're going to... I mean, you've got native assets coming out, and we'll be putting the AGI token out as a native asset, and then a few other SingularityNET spin-offs as native assets.

But I mean, we're going to get to a flourishing native asset ecosystem in Cardano, and then SingularityDAO, which is a DeFi system we're building on SingularityNET, I mean, we can use it to help coordinate getting liquidity into all these Cardano native assets. I'm super psyched about that coming out publicly because not many people are thinking about what you can do when you have a real programming language as a smart contract framework with security by design built in. So, I mean, I think we're really providing stuff that is prepared to explode in an incredible way in 2021.

CH: Yeah. So first about the treasury management, Tesla in 2008 was a day away from bankruptcy, and now it's worth more than Toyota, Honda, Nissan, Ford and GM combined. I mean, it's just crazy how fast they grew. So treasuries can grow exponentially if you get to a certain... It's almost like a standing ovation model where a few people stand up and clap, and then eventually you hit this point, and then everybody just gets up and claps. And it's the same thing, I think, with capital and companies. There are a few pivotal moments where you're just right at this explosive growth, and then boom, the hockey stick happens, and then suddenly you have a lot there. And I think that's happening in the crypto industry. I remember when we hit a billion dollars with Bitcoin, and I was like, "Wow, this is incredible." We could never fathom a trillion dollars. It was a crazy concept, and that happened within eight years of that point. It took nearly five years for it to get to a billion. So it's extraordinary how quickly things can grow.

Then in terms of the collaboration, getting to that, Plutus is coming very soon, and we have this testnet coming out. What we're doing is we're going to beat the hell out of it. So we'd love for you guys to beat the hell out of it with SingularityDAO.

BG: Beat the hell out of it. That's right. Yeah.

CH: We're a little easier because we have the Hard Fork Combinator, but your mistakes tend to sit around forever. Like we made a lot of protocol design mistakes with Byron, and we still have to support them. And so we found a really nice way of doing that. But when we release version one of Plutus and the extended UTXO model and the native asset standard, that's probably not going to be perfect because nothing is. As an engineer, you get version one out there, and yet we have to be backwards compatible. So when you go to version two, you still have to support version one. So to me, it's super important that we get as many people as quickly as possible to beat the hell out of the native asset standard, and beat the hell out of especially Plutus, before we do the next hard fork to bring that in, because I would rather not be backwards compatible with obviously wrong things, as we are with Byron.

So it's great to have you guys around. I know that the code you're going to write is very novel, and it's also going to push the system to its limits. And you're going to create a very strong demand for performance and scale, I think. And I can already see several areas where we would like to use AI, for example, transaction fees. We have this fee parameter, and that's right now set with the update system, so the minimum transaction fee is a DDoS parameter. It'd be so cool, once we have oracles and DEXs within the system and we have some notion of the value of ada relative to the US dollar, to create an automated transaction monetary policy that can take those data points and compare them to other networks in real time, and then try to make sure that we always have a compatible-

BG: This is actually a subtle point that we've been discussing between the SingularityNET platform team and the Cardano platform team, right? Because I mean, the transaction framework for Cardano now, and what's planned for common native assets, is fine for what we're doing with SingularityNET at this moment, but not if we want to go to a swarm AI or microservices model, where you have a whole bunch of little AIs, and within a second one AI is consulting others or creating others. I mean, if you really want to get AI via this dynamic microservices architecture, I want to have this using the blockchain rather than all off on the side. I mean, you need a way for some sub-networks to have substantially lower transaction fees, but then you need some system that's intelligent in some sense to regulate and moderate that, because you still need to protect against DDoS attacks and all sorts of other things, right. So there are a lot of areas like that where some machine learning participating in the infrastructure can help a lot. And one of the things it can help with is to make the system better able to manifest the emergence of higher levels of intelligence and learning, right, so you've got a lot of positive cycles there.

CH: Yeah, and you want it to be deterministic yet dynamic. And you would also like it to be globally aware of competition. So you'd like the agents to be able to parse all the competing blockchains and look at their monetary policies, look at their transaction policies or transaction rates and their relative values to each other, and then be able to pull that into Cardano and form a transaction policy based on that.
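CH's idea of an automated fee policy fed by competitor data could be sketched roughly as below. Everything here is hypothetical: the function name, the discount factor, and the sample fee figures are illustrative stand-ins, not real Cardano parameters or oracle feeds.

```python
# Hypothetical sketch: derive a minimum transaction fee (in ada) from
# observed average fees on competing networks, keeping the
# USD-denominated fee at a fixed discount to the cheapest competitor.

def competitive_min_fee(peer_fees_usd, ada_usd_price, discount=0.5):
    """Target a min fee (in ada) at `discount` times the cheapest peer's fee."""
    cheapest_usd = min(peer_fees_usd.values())
    target_usd = cheapest_usd * discount
    return target_usd / ada_usd_price

# Sample values standing in for oracle feeds of avg tx fees in USD.
peer_fees = {"network_a": 2.00, "network_b": 0.40}

fee_in_ada = competitive_min_fee(peer_fees, ada_usd_price=1.25)
# cheapest peer charges $0.40; half of that is $0.20, i.e. 0.16 ada
```

A deployed version of this idea would need the "deterministic yet dynamic" properties CH mentions: bounded update steps, agreed-upon data sources, and a floor that still serves as a DDoS parameter.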

BG: It is there, right. I mean, the data is there online. You can download it into your AI, and I think that's quite feasible. So, yeah, going back to decentralized social networks, where we started, I mean, there have been, as you know, and you've looked at this in more depth than me even... I mean, there have been loads and loads of attempts to make decentralized social networks. There are dozens of cool projects started by smart, well-intentioned people with the right vision. Obviously, none of them has yet become the next Facebook or Twitter. I mean, some, like Minds.com from Bill Ottman, I think, are really cool, but I don't yet log on there as often as I log onto Facebook, which is not that often, right. I mean, Minds is great. It just doesn't have such a critical mass of people yet, although it's done a way, way better job than the vast majority of decentralized social networks, right?

So how do we get Minds and Everipedia and dozens of other decentralized social network platforms and the new ones that haven't been heard of yet... How do we get these to really take off? And I think we share the conclusion that a lot of what's needed there is to make the underlying stack more amenable to lower costs, larger-scale operations of the needed kinds, both in data storage and processing distribution, and then the distributed AI, also. It's interesting, Jack Dorsey from Twitter has seen this also, and they're looking at making a decentralized protocol and reorganizing the Twitter stack. The question there is, can you really make that work with incentive structures that are implicit in Twitter as the company that it is?

CH: That's why I separate the base protocol from the interface, like what Steem did. They had the Steem protocol and then Steemit as the interface, and their problem was that they didn't have a full end-to-end monetary policy, so they had value leakage. There was no incentive to buy the token, but they used the token to curate information. Had they solved that problem, it would still be around and much larger. But I think that Twitter can survive with a decentralized social network protocol because it would just be a very popular, curated interface to it, and they'd still have their network effect. It's just the customers and the data would be ephemeral. They could flow from one interface to another interface and get that same experience. The problem right now is you have to rebuild the network effect every time you launch a new one of these things. Every time we want to do an internet application, we have to completely rebuild the internet underneath it. It's a preposterous thing, right? Yeah.

BG: It makes sense, and I think it's visionary of Jack Dorsey to even entertain the notion, right? I mean, not many corporations of that scale are willing to-

CH: Well, it's a proactive solution to a big problem he has because if he plays censor in chief and he has to de-platform people from the protocol, then he can never win.

BG: I wouldn't want that job either because, I mean, you got people that are clearly colluding to kill someone. Fine. You ban them. You have people who are saying stuff that's nasty but not yet criminal. And then I don't want them to be in the job of telling what's too nasty and what's okay. I mean, court systems aren't perfect at that, but I mean, they've been honed for that over significant periods of time, and you don't want to have to do that at fast speed and large scale as part of operating your tech company. And I mean, none of these tech companies actually want that job, right. That's not why they got into the business, like how can I censor people's political speech? So, I mean, of course, if things can be reorganized so that that job is done by the community for the community, rather than having to be done by the CEO. I mean, that's far, far better. And the community won't do it perfectly, but actually, it will do a better job than these centralized authorities. And I mean, it's completely possible to do that.

We did a lot of simulations of SingularityNET's machine learning-moderated reputation system over the last couple of years. You can make decentralized, AI-guided rating and reputation systems, and you can tune them, and you can see: if I tune it one way, you get information silos; if you tune it another way, you just get trolls and spammers and so forth. If you tune it in a different way, you get a system that is self-policing and fosters a healthy level of interaction. And you can do this to get networks that self-regulate without anyone having top-level control. If this is operating within the current global political systems, which I have my issues with too, as I'm sure you do, but it's there, then you still have top-level control over things that are clear crimes according to the nation states people's bodies are sitting in, but you don't need top-level control for anything else.

And I think that would not just avoid the kind of garbage de-platforming that Minds is proud of standing against. It would also create something that's a breeding ground for positive and creative and beneficial content, in which people's minds are being nudged toward positive growth rather than channeled into "visit this site and click on this ad." I think the potential is there to do that. What's a little scary is that a handful of us in the decentralized AI space, the two of us included, probably understand more about how to achieve this than anyone else on the planet. It's actually a very big and significant problem, both in terms of setting the stage for a positive singularity and just making life less shitty for humanity on the planet at this moment.
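The tunable reputation dynamics BG describes can be illustrated with a generic EigenTrust-style propagation, in which a single mixing parameter controls how strongly reputation flows along rating edges. This is a toy sketch, not SingularityNET's actual system; all names and numbers are illustrative.

```python
# Toy reputation propagation: each user assigns normalized trust to
# others, and reputation flows along those edges, damped by `alpha`.
# Pushing alpha toward 1 lets tightly-connected cliques dominate
# (silos); pushing it toward 0 flattens all scores toward uniform.

def reputation(trust, alpha=0.85, iters=50):
    users = list(trust)
    rep = {u: 1.0 / len(users) for u in users}
    for _ in range(iters):
        new = {}
        for u in users:
            inflow = sum(rep[v] * trust[v].get(u, 0.0) for v in users)
            new[u] = (1 - alpha) / len(users) + alpha * inflow
        rep = new
    return rep

# Rows are raters; values are the normalized trust they assign.
trust = {
    "alice": {"bob": 1.0},
    "bob":   {"alice": 0.5, "carol": 0.5},
    "carol": {"bob": 1.0},
}
scores = reputation(trust)
# bob receives trust from both alice and carol, so he ranks highest
```

The tuning question BG raises is exactly the choice of `alpha` and of how raw ratings are normalized into the trust matrix; small changes in either produce qualitatively different network behavior.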

CH: The one thing I've always learned from being a cryptocurrency guy is that incentives are king, and it's always been an incentives problem. How many people were, in 1990, being paid to think about social networks? You'd probably be in the sociology department at Harvard or something like that, or toying around in an MIT AI group or something. But it wasn't a real job, and nobody would understand it. How many people who are experts in how to build effective social networks are floating around now? There are thousands of them. They're fabulously wealthy. So if you show that in a free market system you can achieve great wealth, or at least the prospect of great wealth, by building a system of a certain design, then you'll end up getting a lot of it.

The cryptocurrency space was exactly the same. How many people were experts in Bitcoin-like systems in 2010? Very few. Now, in 2021, the incoming chairman of the Securities and Exchange Commission, Gensler, was lecturing at MIT on cryptocurrencies. That's how far we've gone in such a short period of time, because the incentives are right. So when I look at this problem, I say, "Well, how do we get the incentives right to encourage a large group of people to come in and actually start applying serious hardcore brainpower to these types of problems?" So it's a first mover situation. Now, as to Minds and these other guys, to that earlier point you brought up, I look at them almost like mechanical horses. When we were first thinking, how do we build a better horse? It's like, let's build a robot horse, or a steam-powered horse or something like that. Well, now there's this automobile idea that we've been toying around with. Maybe that's just a fundamentally more competitive or better model.

Or similarly, when people were thinking about vacuum tubes, you could certainly optimize them, and I'm sure you could build a much better vacuum tube today than they were building back in the 1940s. But obviously that was superseded by the transistor. So similarly, when you look at social networks, we have to say, "What is our automobile moment to replace the horse?" And Minds is not it. I think that even if those things existed at scale, they'd actually just be worse than Facebook or Twitter. They'd get far more siloed. The three problems I outlined are moderation, the incentive models being aligned so that people can actually make money and do useful things with the system, and the infrastructure funding problem.

You have to solve all three of those with one protocol design and one incentives design. And if you do that, then it's going to be this massive beacon that will attract tons of people to come in and start working on the system, augmenting it and evolving it. And it doesn't matter if it starts very small. It'll go very viral and eventually get to that Tesla-style hockey stick. When Tesla figured out the entire model... There were plenty of battery-powered cars before, but their particular model was the one where everything came together, and then it had exponential growth.

BG: In terms of tokenomics systems, it's quite interesting, because having a unified scheme and dynamic for promoting the right incentives doesn't mean just one token. So you're sculpting multiple tokens in a multi-token ecosystem where they interoperate. So say we have ada, and we have the AGI token on Cardano, and then, say, a decentralized social network running on Cardano and leveraging SingularityNET AI could potentially involve a different token for a certain purpose within that network. You have to think through the interoperation of these different networks. And I think that this is one of the things I'm most excited about in the collaboration between the two of us and between SingularityNET and Cardano. I think you guys have done very well in thinking through incentive structures and how they boil down into tokenomic structures. I look forward to some cognitive synergy among us on that.

CH: We learned how much we don't know. We started this program at Oxford with Elias, who's an algorithmic game theorist. He won the Gödel Prize and all these things. He's a really good guy, and he's got some really good graduate students at Oxford too, so we said, "Okay, between him and his graduate students, we're done. Put a fork in it. We should easily be able to tackle all these consensus incentives problems in Ouroboros." It took two years to refine the entire incentive model just for a consensus algorithm, and now we're talking about incentives for the curation of information. So it's going to be fun to collaborate. I agree there. It's such a hard problem.

BG: Curation of information that's being created by decentralized AI algorithms, not just of existing information.

CH: Yeah, because you need to create demand for a token and you need to be able to use that token because it's demanded and it's valuable to incentivize a certain collection of human behavior. You also need to be able to use that to incentivize people to interact with agents in a way that they could become trained to become good cognitive crutches to reinforce the network, and then that token also has to incentivize the hosting of decentralized infrastructure that eventually can scale to petabyte scale storage and huge network capacity and massive computational capacity. It's a tall order. It's a lot of incentive engineering, and that's why I don't think these networks exist yet.

BG: They don't. As you say, once it's gotten to a certain level, the potential to gain both personal wealth and to help promote broad benefit to a huge degree, those are both there in a very clear way, which I think can cause a rush of talent into the space of decentralized AI and decentralized AI guided social networks. We're at a pivotal moment now, I think, in terms of both the readiness and even eagerness of the world for these technologies and the existence of the needed tools, or at least a significant fraction of the needed tools to create them. This conversation is occurring in a quite interesting time.

CH: But the good news is that there are a lot of almost-right attempts. Like with the creation of Bitcoin, we had HashCash and bit gold and DigiCash. They were wrong, but they were wrong in the right direction, so you just had to pull them along enough, and then eventually it all fell into place. So you have things like BAT, which I mentioned before, and suddenly now you've created demand for a token. Steem had enormous growth, but the problem was there was no demand for the token, though there was good payment for content creation and curation. So they got a lot of users, but they had too much value leakage, so they couldn't sustain network value, and then the system fell apart.

I almost felt that if you could combine BAT and Steem together, you'd have created a feedback loop where the system would sustain itself and continue to grow at a very rapid rate. However, they had to use the token to subsidize the actual running of the infrastructure. They didn't have a sustainable model there. So even though Steem was the protocol and Steemit was just the company, the Steemit company had all the power and control because they were the people that could afford to run that protocol.

BG: We've got Hive now, right? I mean, that's the beauty of open source code and decentralized communities.

CH: It's a Pareto problem where a small group runs the vast majority of everything, and there's no economic diversity there. With Cardano, we spent five years on Ouroboros because we wanted a system that would get naturally more decentralized over time. So as the price of ada increases, the k factor increases, and then suddenly you go from 1,000 to 10,000 stake pools and then 100,000, and then all the infrastructure is federated with those stake pools, so suddenly you have 10,000 Hydra channels and suddenly you have 10,000 oracle entry points, et cetera, et cetera. So the system, when we get to Bitcoin scale, could have 100,000 stake pool operators that run it, and that scales quite nicely.

BG: I'm sort of thinking into the growth of SingularityNET during the next phase. I think that the platform as we've built it now does something good: it lets you create multiple AI agents all over the place that collaborate and cooperate to solve hard problems. But we need to architect the next stages of development in a way that will incentivize a massive increase in utilization of the platform using AGI ADA, but also ensure that increasing decentralized control of the network happens along with this massively increasing utilization. I think we can do it, and I think a lot of the thinking you guys have put into growing Cardano is actually helpful there in ways that we probably don't have time to explore in this podcast.

CH: Well, you get the democracy stack for free with Catalyst, and you also get the decentralized infrastructure for free. One thing we'd love to do is see if we can get outsourceable computation. I've been following that for God knows how many years, like Pinocchio and Geppetto over at MSR: can you do the computation on an untrusted computer, but then provide a proof that the computation was done correctly? And then you know that whatever result was given is right, regardless of who did it.

BG: That's there on the computer science level, but it's not yet there on the scalable, usable software level.

CH: We have some proofs that these algorithms work, but a lot of them are exponential time.
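The "proof that the computation was done correctly" idea can be illustrated with Freivalds' algorithm, which checks an untrusted claim that C = A x B in roughly O(n^2) time per round instead of redoing the O(n^3) multiplication. Schemes like Pinocchio use cryptographic proofs instead; this probabilistic check is just a toy illustration of verifying more cheaply than recomputing.

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify the claim C == A @ B.

    Each round picks a random 0/1 vector r and compares A(Br) with Cr,
    both computable in O(n^2). A wrong C survives each round with
    probability at most 1/2, so all `rounds` rounds with prob <= 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False          # caught a wrong answer
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]       # the true product A @ B
bad = [[19, 22], [43, 51]]        # one wrong entry
```

The untrusted worker does the expensive multiplication; the verifier runs only the cheap check, which is the essence of outsourceable computation.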

BG: One of the things I've been doing with my non-existent spare time is going through all the core cognitive algorithms of OpenCog, which is the AI architecture I'm working on, expressing them in terms of Galois connections over metagraphs and chronomorphisms and stuff, so you get the right elegant formalization of your core cognitive algorithms. And once you've done that, then you can deploy the kind of math you're describing, so that this core AGI computation could be done by outsourced computing. So the math and CS are there for a lot of these different things, but there are a number of stages yet to go through before that kind of thing is rolled out scalably.

CH: That's an interesting mathematical expression. Do you deal with a dependent type system?

BG: It's an independent pair of consistent probabilistic type systems, so yeah.

CH: That's a mouthful. But can you prove anything interesting? Can you show certain things are isomorphic to each other, or what are you looking for with those?

BG: We are working on that right now, actually. But this would probably lead us too deep down some unusually interesting rabbit holes for a broad audience podcast.

CH: Okay, fair enough. All right. Well, Ben, this has been so much fun. I have another meeting I got to jump into, but I really enjoyed our time.

BG: Yeah, this is fantastic. It's both broad and deep, and I think decentralized social networks, it's both really important on its own, and I think we can work together to solve it, but it also highlights a bunch of other more general points, both about bringing Singularity and Cardano together and about just what we need blockchain and AI together to do. So yeah, very cool. Look forward to the next one.

CH: I guess a closing point is that platforms tend to get defined by the killer apps on them, and I'm very glad that one of the most meaningful and significant applications on our platform is SingularityNET. I would hate to see us be defined by CryptoKitties or something like that. It's great to have you guys around. I think this collaboration is going to result in an enormous amount of evolution of our own platform and an acid test of things in a way that's very productive for everybody. And my hope is you guys become one of the most successful pieces of infrastructure on top of Cardano, and it leads to a lot of user growth. And we're not just collaborating technologically. I think we're going to share some office space at some point in Ethiopia.

BG: The space has been found, actually. So our Addis team and your Addis team will co-locate.

CH: John was very excited about it, so I imagine the office is quite nice.

BG: It's in Bole, which is a great neighborhood. It was a very pleasant and surprising coincidence that we actually both had flourishing teams in Addis Ababa contributing to the development of our various platforms. Very cool that maybe the next time we meet face to face will be over some injera in Addis.

CH: That'd be a lot of fun. We just have to get rid of the civil war and the COVID, but those are just minor technical details. All right. Thank you so much, Ben.

BG: Great. Thanks a lot.
