Lilypad Project Report: May 22, 2024
Incentivized Testnet, HackFS, adding a module to Lilypad, and more
Image generated with SDXL v0.9 run on Lilypad: lilypad run sdxl-pipeline:v0.9-base-lilypad3 -i Prompt="two animated robot friends catching up" -i Steps=200
Overview
As the Lilypad (Milky Way) Testnet scales, our team has been focused on optimizing the network. Resource Providers are now incentivized to join the network as early as possible with a rewards program for processing jobs! New AI/ML models to run on Lilypad are added weekly, and the developer community continues to grow!
A huge thank you to Resource Providers running on the Testnet and to devs building systems with Lilypad as the backend 🪷.
Lilypad Incentivized Testnet Phase 1 will launch in mid-June (pushed back to allow for additional testing)! The goal is to reward early Resource Providers on the current Lilypad Milky Way Testnet. On-board idle GPUs to the network and earn points for running AI/ML inference jobs. Points will be rewarded in LP tokens at the Lilypad Mainnet launch (Q4 2024). More info on this very soon!
Check out the Lilypad Linktree and FAQs to learn more!
⚙️ Engineering Update
Scaling the Milky Way Testnet
As additional Resource Providers on-board, our team is focused on managing this growth while ensuring network reliability. This involves troubleshooting the systems used by Resource Provider nodes, such as Bacalhau and IPFS.
Internal metrics systems have been scaled to monitor the network across a range of data sources. Using OpenTelemetry, the metrics system gives the team a detailed overview of network conditions, the nodes on the network, and more.
At the same time, new AI/ML models continue to be added by our team and the community leveraging this additional compute capacity.
Cheers to the Lilypad engineering team for supporting this huge push!
🔬 Research Update
Investigating requirements for different models
Our model onboarding process for AlphaFold and Llama 3 prompted an in-depth investigation into the requirements for deploying AI models. This research encompassed hosting, deployment, pricing, and cost analysis, with the aim of understanding the steps needed to host AI models, identifying current solutions, and determining the requirements for deploying various models. We have now completed the onboarding process for Llama 3, and AlphaFold is expected to be ready for deployment soon.
The onboarding process for AlphaFold and Llama 3 also led us to explore new research dimensions, specifically the concept of determinism in AI models and its relation to validation. This investigation was prompted by the need to integrate with Lilypad's verification system (and, more broadly, any verification system), which surfaced important requirements for the protocol.
Lastly, we started exploring the potential of decentralized zero-knowledge machine learning (ZKML) deployment, which could provide both verification and privacy. Building on this, we will be pursuing a small research epic focused on the ZK and determinism/validation questions, which will result in new documents in the Google Drive. We have also made progress on content creation, completing drafts of the AlphaFold and Llama 3 blog posts as well as an outline for a DeSci micropub.
Lilypad "All the Things" Update
HackFS
HackFS is ongoing, with teams competing for $150k in hackathon bounties! Lilypad is providing $5k in bounties along with use cases including serverless AI inference, and has already seen a lot of action!
If you missed the opening presentations, catch up on YouTube. Ally and Steve from team Lilypad walk through the Lilypad compute network and demo the CLI.
Add a model to run on Lilypad
Lilypad provides serverless, distributed compute for AI/ML jobs. To run a model on Lilypad, a Lilypad module that runs the desired AI model must be created. Learn more about building a custom job module to run on Lilypad with this guide! Check out the Lilypad docs for more info.
Once a module is created, it can be run on the Lilypad serverless, distributed compute network using the CLI. Developers needing to run models can use Lilypad as their backend and call jobs from a frontend with the Lilypad JS CLI Wrapper!
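For example, a published module is run with the same CLI pattern as the SDXL job in the caption at the top of this post. This is a minimal sketch: the module name, version tag, and input below are placeholders, and the inputs a module accepts depend on how that module is defined.

lilypad run your-module:v0.1.0 -i Prompt="a pond of lilypads at sunrise"

Each -i flag supplies a named input to the job, like the Prompt and Steps inputs in the image caption above.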
Smart Contracts on Lilypad
Run AI/ML inference from a Solidity smart contract using Lilypad! Check out this awesome Hardhat template for running Lilypad smart contracts from a dapp. Run jobs using popular models like SDXL v0.9 and Stable Video Diffusion 1.0 from a smart contract.
🔮 What's Next?
The Lilypad Incentivized Testnet Phase 0 began last week and has already seen substantial growth from partners and individuals on-boarding GPUs. As Phase 1 of the IncentiveNet approaches, we are scaling backend metrics systems and ensuring the testnet is Sybil-resistant. The goal of the IncentiveNet is to stress-test many facets of the network and challenge the mechanisms put in place to protect against malicious actors.
Join our IncentiveNet Discord channel to stay in the loop and get help. We look forward to feedback from the community!
✉️ Contact Us
💬 Chat with us on Discord and follow along on X/Twitter for the latest news and updates!