Welcome to the first project report for the Lilypad Project. We're developing components for, and prototypes of, decentralized compute networks (DCNs) on top of the Bacalhau project.
CoD Summit^3 in Boston
We recently got back from CoD Summit^3 in Boston, where there were many great talks and meetings about decentralized compute networks (DCNs), including a great talk from Juan Benet and Molly Mackinlay, leaders at Protocol Labs, in which they described their vision for compute networks as an "L2" on top of L1 networks. We were also lucky to have a long and detailed design meeting with Molly and Juan the following day to discuss how to build future DCNs.
Design Criteria for DCNs
Here's a slide from Juan's original CoD talk in Paris back in 2022:
The Bacalhau project has already developed several of the components in the green boxes on the left: tooling that is common across many systems (SDKs and APIs, a high-performance network scheduler and pipeline schedulers, monitoring infrastructure, etc.). Now it's time to start building components that are specific to DCNs: protocols, schedulers, verification mechanisms, incentive structures, and ultimately a prototype testnet for a running DCN. Along the way we'll learn a lot about challenges and solutions we haven't foreseen yet, which should help many future DCNs succeed.
Where in the triangle shall we begin?
In the talks linked above, Juan describes a triangle of decentralized compute: the idea is that you can't have all three of privacy, verifiability, and performance, and that different tradeoffs exist at different points of this space.
For our first prototype DCN, we decided to start with a variant of optimistic verification with economic staking (game-theoretic staking and slashing, assuming non-malicious, utility-maximizing agents). This point in the tradeoff space offers the best performance for mainstream computation, which should make adoption easier.
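To make the game-theoretic intuition concrete, here is a minimal sketch (our own names and numbers, not the protocol's actual parameters): in an optimistic scheme, results are accepted by default and only spot-checked, so cheating is irrational for a utility-maximizing node whenever the expected slashing loss exceeds the gain from skipping the computation.

```python
# Hypothetical incentive check for an optimistic staking/slashing scheme.
# All names and figures here are illustrative assumptions.

def cheating_is_irrational(stake, check_probability, gain_from_cheating):
    """Cheating doesn't pay when the expected penalty from being caught
    (audit probability times slashed stake) exceeds the gain from cheating."""
    expected_penalty = check_probability * stake
    return expected_penalty > gain_from_cheating

# Example: with a 10% audit rate, the stake must exceed 10x the cheating gain.
assert cheating_is_irrational(stake=150, check_probability=0.1, gain_from_cheating=10)
assert not cheating_is_irrational(stake=50, check_probability=0.1, gain_from_cheating=10)
```

The practical takeaway is that stake size and audit frequency trade off against each other: a higher spot-check rate lets the network demand less collateral from providers.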
Key idea: Interplanetary Consensus (IPC) enables on-chain scheduling
Before we went into the Boston meeting, we were expecting to reuse the existing distributed scheduler in Bacalhau, namely bprotocol. However, bprotocol does not currently have a built-in anti-cheating mechanism, and building one without blockchain primitives would mean implementing some form of consensus of our own inside the scheduler. One of the key observations from the meeting was that leveraging Interplanetary Consensus (IPC) instead will enable high-performance subnets of compute nodes that run on-chain scheduling.
Literature review yields existing promising results
We've spent a lot of time over the last few months reviewing the existing literature on DCNs, hoping to stand on the shoulders of giants rather than reinvent the wheel. We found a particularly promising paper called Mechanisms for Outsourcing Computation via a Decentralized Market.
The paper proposes a set of actors which implement scheduling on-chain, and even includes sample code for the smart contract in Solidity and the actors in Python.
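To give a flavor of the actor pattern, here is a heavily simplified sketch (our own naming and matching logic; the paper's actual contract interface and actors are richer): a solver matches job offers from job creators against resource offers from providers, and the resulting match would be recorded on-chain by the smart contract.

```python
# Simplified, hypothetical sketch of off-chain matching in an on-chain
# scheduling protocol. Class and field names are our own assumptions.
from dataclasses import dataclass

@dataclass
class JobOffer:
    job_id: int
    max_price: int  # the most the job creator will pay

@dataclass
class ResourceOffer:
    provider: str
    min_price: int  # the least the provider will accept

def solve(job_offers, resource_offers):
    """Greedy matching: pair each job with the cheapest compatible provider."""
    matches = []
    available = sorted(resource_offers, key=lambda r: r.min_price)
    for job in job_offers:
        for res in available:
            if res.min_price <= job.max_price:
                matches.append((job.job_id, res.provider))
                available.remove(res)
                break
    return matches

jobs = [JobOffer(1, max_price=10), JobOffer(2, max_price=3)]
resources = [ResourceOffer("rp-a", min_price=5), ResourceOffer("rp-b", min_price=2)]
assert solve(jobs, resources) == [(1, "rp-b")]  # job 2's budget is below rp-a's ask
```

In the real protocol the match, payment, and any dispute would all be mediated by the contract rather than trusted to the solver.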
Putting it all together
So after we got back from Boston, we quickly realized that we had all the parts we needed to rapidly develop an end-to-end prototype of an on-chain scheduler. We have:
IPC subnets which are EVM compatible
Mechanisms paper with EVM Solidity contract & research code in Python
Bacalhau client, requester (for now) & compute node with IPFS integration
What we did:
Got Mechanisms code running (research code, but it works!)
Replaced geth with Hardhat (faster and more developer-friendly)
Plugged Mechanisms ResourceProvider into Bacalhau instead of Docker
And 🎉 we have a rough on-chain scheduler prototype:
Connecting it to IPC
So far the demo above runs only on a local Hardhat instance, but we're also collaborating with the IPC team, who have provisioned a test subnet for us. We're currently working on deploying the same smart contract from the demo to the subnet, so that we can provide feedback to the IPC team.
This will also allow us to test the performance of IPC subnets: hopefully they will ultimately enable sub-second scheduling, much faster and cheaper than waiting for every transaction to finalize on the L1.
Work in progress connecting to subnets
So far our initial attempts to connect the Mechanisms code, which makes raw RPC calls with pycurl, to IPC have been stymied by the fact that the pycurl code doesn't support unlocking wallets with local private keys. We are working around this by using Hardhat to deploy the contracts, but this isn't complete yet.
Improving the Mechanisms protocol
We're investigating the feasibility of replacing the Moderator in the protocol with a consortium of nodes, to avoid the dependency on a partially-trusted third party.
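One way such a consortium could work is a k-of-n quorum; the sketch below is our own assumption about the design, not the protocol's specification: a disputed result stands only if at least k of n independent nodes reproduce the same output.

```python
# Hypothetical k-of-n consortium verdict replacing a single Moderator.
# Names, threshold, and result encoding are illustrative assumptions.
from collections import Counter

def consortium_verdict(recomputed_results, k):
    """Return the result agreed on by at least k nodes, or None if no quorum."""
    winner, count = Counter(recomputed_results).most_common(1)[0]
    return winner if count >= k else None

# 2-of-3 quorum: two honest nodes agree, one faulty node disagrees.
assert consortium_verdict(["0xabc", "0xabc", "0xdef"], k=2) == "0xabc"
# No two nodes agree: the dispute cannot be resolved by this quorum.
assert consortium_verdict(["0xabc", "0xdef", "0x123"], k=2) is None
```

The appeal over a single moderator is that any k honest nodes suffice, so no one party has to be trusted; the cost is k redundant recomputations per dispute.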
Novel Game-Theoretic Approaches
We are exploring the possibility of augmenting traditional collateral requirements, which have been widely explored in the cryptocurrency world in general, and Filecoin in particular, with market-based approaches. Such approaches would allow for much more flexibility in a number of important areas, including lowering capital requirements to participate in the network and enabling insurance collateral pools. This collateral market could be implemented in the protocol itself, and the primitives used to create it substantially overlap with the market structure for compute jobs.
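As toy arithmetic for why pooling can lower capital requirements (the numbers and model here are entirely our own assumptions, not a designed mechanism): if slashing events are rare enough that only a few providers are ever slashed concurrently, a shared pool need only cover that worst case, rather than every provider individually locking a full stake.

```python
# Toy collateral-pool arithmetic; all parameters are illustrative assumptions.

def per_provider_capital_individual(slash_amount):
    """Without pooling, each provider locks the full slashable stake."""
    return slash_amount

def per_provider_capital_pooled(slash_amount, n_providers, max_concurrent_slashes):
    """With pooling, the pool covers the assumed worst case of concurrent
    slashes, and each provider funds an equal share of it."""
    pool_size = slash_amount * max_concurrent_slashes
    return pool_size / n_providers

# 100 providers, 1000-token slash, pool sized for at most 5 concurrent slashes:
assert per_provider_capital_individual(1000) == 1000
assert per_provider_capital_pooled(1000, 100, 5) == 50.0
```

A real design would of course need to price the risk that the concurrency assumption is violated, which is exactly where the market-based approaches mentioned above come in.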
The Lilypad Team is supporting HackFS with a workshop on decentralized compute for Filecoin & FVM on June 6 and mentoring throughout the hackathon.
Eliminating the requester node in the prototype architecture
Develop use cases for the prototype
Continue literature review and iteration on protocol