🌐Capx Network [Symbiotic]
The Capx Network on Symbiotic is a pivotal component of the Capx ecosystem, designed to enhance the deployment, operation, and monetization of AI agents. This network acts as a resource matching engine, leveraging the Symbiotic platform to efficiently allocate AI resources, ensuring optimal performance and democratized access to AI technology. Additionally, Capx utilizes restaked ETH from Symbiotic to bring enhanced security to the matching engine, ensuring robustness and reliability.
Overview of Capx Network on Symbiotic
The Capx Network on Symbiotic integrates advanced technologies to create a seamless and efficient infrastructure for AI agents. By leveraging Symbiotic's capabilities and the security provided by restaked ETH, Capx ensures that AI agents can scale dynamically, handle complex computations, and operate securely and efficiently.
Key Features of the Capx Network on Symbiotic
Resource Matching Engine:
The Capx Network utilizes a sophisticated resource matching engine to allocate computational resources to AI agents.
This engine matches the supply of AI APIs and node infrastructure with the demand from AI agents, ensuring efficient resource utilization.
Decentralized AI API Marketplace:
AI APIs are crowdsourced from various providers, creating a decentralized marketplace.
This marketplace allows AI agents to access a diverse range of APIs, enhancing their capabilities and performance.
Scalable Infrastructure:
The network supports scalable infrastructure for running AI agents, enabling them to handle varying loads and demands.
Scalability ensures that AI agents can operate effectively, even during peak times.
Enhanced Security with Restaked ETH:
By utilizing restaked ETH from Symbiotic, the Capx Network enhances the security of its resource matching engine.
Restaked ETH provides a robust security layer, protecting the network from potential attacks and ensuring data integrity.
Secure Data Handling:
Data security is paramount in the Capx Network. Advanced encryption and secure computation protocols protect user data and ensure privacy.
The network complies with stringent data protection regulations, maintaining the highest standards of security.
Architecture of the Capx Network on Symbiotic
The architecture of the Capx Network on Symbiotic is designed to provide a robust and flexible environment for AI agents. Key components include:
Symbiotic Platform:
The Symbiotic platform provides the underlying infrastructure, enabling the Capx Network to leverage its advanced computational and security features.
Symbiotic's decentralized nature ensures that the network remains resilient and scalable.
AI Resource Matching Engine:
This engine dynamically allocates computational resources and AI APIs based on real-time demand and supply.
It ensures that AI agents have access to the necessary resources to perform their tasks efficiently.
Decentralized API Providers:
API providers contribute their services to the network, creating a decentralized marketplace.
These providers are incentivized through Capx tokens, ensuring a steady supply of high-quality APIs.
Security with Restaked ETH:
The integration of restaked ETH from Symbiotic provides an additional security layer to the Capx Network.
This ensures the integrity and security of transactions and data within the network.
Data Security and Privacy:
The network employs advanced security measures to protect data integrity and privacy.
Secure multi-party computation (MPC) and trusted execution environments (TEE) are used to safeguard sensitive information.
AI Provider to Customer Request Matching
Problem Statement:
Each AI node provider can offer access to a multitude of AI models via API. These AI models can be:
Closed source, where the AI node provider acts as a relayer
Open source, where the AI node provider runs the model and provides inference from it.
These AI node providers charge the user per API request. The cost of such a request depends on a multitude of factors, such as:
Resource availability
Reputation of Provider
Latency
AI model itself
Hardware cost incurred by the node provider
API request itself (tokens sent by the user)
These costs are never fixed and keep varying based on the node provider's state, availability, and so on. Each node provider may propose a different cost for the same resource over time, based on its condition and trust requirements.
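As a rough illustration of how such a per-request quote might be composed from the factors listed above, consider the sketch below. All field names, weights, and the pricing formula are hypothetical placeholders, not part of the Capx protocol specification.

```python
from dataclasses import dataclass

@dataclass
class ProviderState:
    # All fields and the formula below are illustrative placeholders,
    # not part of the Capx protocol specification.
    base_model_fee: float       # cost attributed to the AI model itself
    hardware_cost: float        # hardware cost incurred per request
    reputation_discount: float  # 0.0..1.0, better reputation -> larger discount
    load_factor: float          # > 1.0 when resources are scarce
    latency_penalty: float      # surcharge applied for slow links

def quote_request(state: ProviderState, tokens_sent: int) -> float:
    """Hypothetical per-request quote combining the factors listed above."""
    token_cost = 0.0001 * tokens_sent               # API request size (tokens)
    raw = state.base_model_fee + state.hardware_cost + token_cost
    raw *= state.load_factor                        # resource availability
    raw += state.latency_penalty                    # latency
    return raw * (1.0 - state.reputation_discount)  # reputation of provider

# Example: the same request is quoted differently as provider state changes.
busy = ProviderState(0.02, 0.01, 0.1, 1.5, 0.005)
idle = ProviderState(0.02, 0.01, 0.1, 1.0, 0.0)
print(quote_request(busy, 500), quote_request(idle, 500))
```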
Because of these varying costs, it is the protocol's responsibility to match the user's request with the best provider when the user has no specific provider in mind.
If the user only wants to deal with a specific provider, to avoid disputes related to cost and trust requirements, the protocol is also responsible for keeping that provider's interests in mind and ensuring the user's request satisfies all of the provider's requirements.
Based on the above problem statement, the solution requires running a decentralised matching engine and a policy-enforcing network to verify each user request.
Three possible designs exist for such a system:
Completely trustless Matching Engine (Operator with backing from restakers)
Disperser Based Semi Trustless Matching (Operator with backing from restakers)
Smart Contract Based Trustless Matching + Operator-based order validation
Each of these design choices comes with its own set of advantages and tradeoffs, as discussed below:
Completely trustless Matching Engine (Operator based)
Participating Entities:
User
Operator Node (AI node provider)
Disperser
Introduction
In the following system, the role of AI node providers is played by Operators in the Symbiotic network. These Operators subscribe to vaults backed by collateral tokens deposited by ETH restakers and $Capx token stakers. The providers also run a leaderless auction mechanism that is responsible for matching users' requests to the best possible AI node provider service.
This system has the following advantages:
Addresses the problem of one participant being allowed to act after everyone else
Surpasses the TPS limits of most chains and provides order matching with low latency
Avoids wasting blockchain resources, as it moves the matching work to the AI node providers themselves, which provide high economic security
Better value proposition for AI node providers, as they don't have to spend money on gas while submitting their bids to the system
Avoids censorship disputes
Avoids the last-look problem, where a node provider may insert or cancel their bid after all other participants have acted
Completely trustless
The above system arrives at the result in three rounds and creates an easily validated aggregate signature verifying it in a fourth. Any party (the disperser) can submit these signed results to the blockchain to finalise the auction, while the user can query the disperser for the winning Node Provider without waiting for the auction result to be finalised on-chain.
Assumptions Made While Designing the Protocol
We assume that in a group of 3f+1 AI Node Providers, 2f+1 are "honest," which is the same requirement used by Byzantine-fault-tolerant consensus protocols.
Our assumption regarding synchrony is that all honest AI Node Providers can send messages to each other, guaranteed to be received within a certain timeframe, for instance, 1 second. This assumption is not critical for maintaining overall safety or ensuring liveness (to prevent conflicting auction results and guarantee that auctions proceed), as we rely on L2 for these aspects. The implications of a breach of this assumption are discussed later.
Additionally, we assume that participants' clocks are synchronised, similar to those in L2.
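For reference, the thresholds used throughout the rest of this document follow directly from the 3f+1 assumption above. The helper below is a minimal illustrative sketch, not part of any Capx or Symbiotic codebase.

```python
def bft_thresholds(n: int) -> dict:
    """Derive fault-tolerance thresholds for n = 3f + 1 node providers.

    f      : maximum number of faulty providers tolerated
    honest : minimum number of honest providers (2f + 1)
    quorum : bid sets / signatures needed to guarantee an honest one (f + 1)
    """
    if n < 4 or (n - 1) % 3 != 0:
        raise ValueError("expected n = 3f + 1 with f >= 1")
    f = (n - 1) // 3
    return {"f": f, "honest": 2 * f + 1, "quorum": f + 1}

# Example: with 10 providers the network tolerates f = 3 faulty ones,
# expects at least 7 honest providers, and uses an f + 1 = 4 quorum.
print(bft_thresholds(10))
```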
Symbiotic Network
Symbiotic is a shared security protocol that serves as a thin coordination layer, empowering network builders to control and adapt their own (re)staking implementation in a permissionless manner.
By allowing the network to be run by decentralised service providers through a leaderless auction, the protocol creates an incentive for restakers to make mutually beneficial deals custom to their specifications using “Vaults”, where restakers can deposit their $Capx tokens and restaked ETH.
Operators can then opt into this network and earn their share based on the vaults they subscribe to.
Details of how the solution works
The system assumes that a user has placed their request, with its details, either with the disperser or on-chain (in case they do not trust the disperser).
If the order is placed directly with the disperser, the disperser notifies the AI Node Provider network, i.e. the Symbiotic operator network (both are the same), to start the auction mechanism and submit the result back to the disperser. The disperser submits the result to the chain while notifying the user about the winner, so the user can start communicating with the winning AI provider without waiting for the result to be finalised on-chain. This case assumes the user fully trusts the disperser.
If the order is placed directly on-chain (because the user feels their requests are being censored or the disperser is modifying them), the Node Provider network, again the Symbiotic operator network, is notified via the event emitted by the smart contract to start the auction mechanism and submit the result back to the disperser, which submits it to the chain while notifying the user about the winner, so the user can start communicating with the winning AI provider without waiting for the result to be finalised on-chain. This case assumes the user partially trusts the disperser, as the user relies on the disperser to learn the winning node before the auction result is finalised on-chain.
If the order is placed directly on-chain (because the user feels their requests are being censored or the disperser is modifying them), the Node Provider network is likewise notified via the event emitted by the smart contract to start the auction mechanism and submit the result back to the disperser, which submits it to the chain. The user waits for the auction result to finalise on-chain and, based on that finalised result, starts communicating with the winning Node Provider. This case assumes the user has zero trust in the disperser, as the user does not rely on the disperser to learn the winning node and instead waits for the auction result to be finalised on-chain.
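The three flows above differ only in where the order is placed and in how the user learns the winner. The sketch below is purely illustrative; the enum, function, and request fields are hypothetical and not part of any Capx SDK.

```python
from enum import Enum, auto

class DisperserTrust(Enum):
    FULL = auto()     # order placed with the disperser, winner read from disperser
    PARTIAL = auto()  # order placed on-chain, winner read from disperser early
    ZERO = auto()     # order placed on-chain, winner read only after on-chain finality

def place_order(trust: DisperserTrust, request: dict) -> str:
    # Illustrative flow only; the actual contract and disperser APIs are not shown.
    if trust is DisperserTrust.FULL:
        return "submit request to disperser; poll disperser for winner"
    if trust is DisperserTrust.PARTIAL:
        return "submit request on-chain; poll disperser for winner before finality"
    return "submit request on-chain; wait for finalised auction result on-chain"

print(place_order(DisperserTrust.PARTIAL, {"model": "example", "max_price": 0.1}))
```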
Auction Mechanism
Step 1: Each Node Provider submits their signed bid to all other Node Providers
Assuming there are “3f+1” AI node providers in the network, they all sign their bids and send them to all the AI node providers in the network. Here “f” denotes the maximum number of faulty nodes.
These bids are encrypted using threshold cryptography, i.e. each bid is encrypted under a public key such that decryption requires any collection of “f+1” or more participants. This ensures no Node Provider can see the value of any other Node Provider's bid before submitting their own.
The result of step 1 is that every Node Provider has a set of bids containing the bids of all other Node Providers along with their own.
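As a rough illustration of Step 1, the sketch below shows the shape a sealed bid might take. The threshold_encrypt and sign helpers are stand-ins for a real threshold-encryption and signature scheme; all names and data shapes here are assumptions, not the protocol's actual cryptographic interfaces.

```python
import hashlib
import json
from dataclasses import dataclass

def threshold_encrypt(public_key: str, plaintext: bytes) -> bytes:
    # Placeholder only: a real scheme would encrypt under a shared public key
    # such that any f + 1 decryption shares can recover the plaintext.
    return hashlib.sha256(public_key.encode() + plaintext).digest()

def sign(private_key: str, message: bytes) -> str:
    # Placeholder signature; a real deployment would use e.g. BLS or ECDSA.
    return hashlib.sha256(private_key.encode() + message).hexdigest()

@dataclass(frozen=True)
class SealedBid:
    provider_id: str
    ciphertext: bytes   # threshold-encrypted bid value
    signature: str      # provider's signature over the ciphertext

def make_sealed_bid(provider_id: str, private_key: str,
                    auction_pubkey: str, price: float) -> SealedBid:
    plaintext = json.dumps({"provider": provider_id, "price": price}).encode()
    ciphertext = threshold_encrypt(auction_pubkey, plaintext)
    return SealedBid(provider_id, ciphertext, sign(private_key, ciphertext))

# Each provider broadcasts its sealed bid to all 3f + 1 providers in Step 1.
bid = make_sealed_bid("provider-7", "sk-provider-7", "auction-pk", 0.042)
print(bid.provider_id, bid.ciphertext.hex()[:16], bid.signature[:16])
```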
Step 2: Each Node Provider shares their bid set with all other participating Node Providers
Each Node Provider communicates the signed set of all the encrypted bids (the bid set) they received in step 1, including their own, to all other Node Providers over a peer-to-peer gossip protocol. This results in all the Node Providers having a “bid view” over all the bids that each Node Provider has.
The reason for this bid set exchange is that an honest participant will only include a bid in their bid set if the bid arrived before the end of the time limit set in the protocol, as determined by the Node Provider's synchronised clock at the end of step 1. Since there are at most “f” malicious/dishonest Node Providers, if a bid appears in “f+1” or more bid sets, at least one of them came from an honest Node Provider, and that bid must have been sent before the expiry of the time limit in step 1.
After such an exchange, any Node Provider could in principle find the perfect match for the user's request and conclude the auction, but the Node Provider selected for this job would need to be honest, and the protocol has no way to confirm that, so it cannot trust the result from a single Node Provider.
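The validity rule implied above (a bid counts only if it appears in at least “f+1” bid sets, so at least one honest Node Provider vouches for its timeliness) can be sketched as follows; the data shapes are illustrative assumptions.

```python
from collections import Counter

def valid_bids(bid_view: dict[str, set[str]], f: int) -> set[str]:
    """Return bid ids that appear in at least f + 1 of the bid sets.

    bid_view maps a node provider id to the bid set it reported
    (each bid set is the set of bid ids it received before the deadline).
    """
    counts = Counter()
    for bid_set in bid_view.values():
        counts.update(bid_set)
    return {bid_id for bid_id, n in counts.items() if n >= f + 1}

# Example with f = 1 (so 3f + 1 = 4 providers): "bid-D" arrived late at most
# providers and only shows up in one bid set, so it is not valid.
view = {
    "p1": {"bid-A", "bid-B", "bid-C"},
    "p2": {"bid-A", "bid-B", "bid-C"},
    "p3": {"bid-A", "bid-B", "bid-C", "bid-D"},
    "p4": {"bid-A", "bid-C"},
}
print(valid_bids(view, f=1))  # {'bid-A', 'bid-B', 'bid-C'}
```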
Step 3: Each Node Provider shares their bid view and threshold decryption information with all other participating Node Providers
Each Node Provider communicates their bid view, i.e. the set of all signed bid sets they received at the end of step 2 (containing encrypted bids, including their own), to all other Node Providers, along with their share of the decryption information used in the threshold encryption scheme, over a peer-to-peer gossip protocol. Any collection of “f+1” decryption shares can be used to decrypt all the bids.
In terms of data sent through the network, with the total number of Node Providers denoted by “n”, each bid view contains O(n) bid sets, which in turn consist of O(n) bids. Each Node Provider must send this bid view to O(n) Node Providers, for a total of O(n^3) data sent through the network in the worst case. However, each Node Provider should already have the same actual bids as every other Node Provider in the network, and the presence or absence of a single bid in a given set is just a single bit.
If all the Node Providers are honest and the network is working optimally, all Node Providers should have every bid in their bid set, and every Node Provider should have a bid set from every other Node Provider. So a Node Provider only needs to indicate which bid sets or bids are missing from their bid view: in the best case a single bit indicating that everything is present, or just a few bits when a particular bid is missing from a particular bid set (e.g. bid set 2 was missing bid 4, everything else is there).
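Since every honest Node Provider is expected to already hold the same bids, a bid view can be communicated as a compact diff describing only what is missing. A minimal sketch of such an encoding, with illustrative names:

```python
def bid_view_diff(expected_bids: list[str],
                  bid_sets: dict[str, set[str]]) -> dict[str, list[str]]:
    """Report only what is missing, per bid set; an empty dict means 'all present'."""
    missing = {}
    for provider, bid_set in bid_sets.items():
        absent = [b for b in expected_bids if b not in bid_set]
        if absent:
            missing[provider] = absent
    return missing

# Best case: nothing is missing, so the diff is effectively a single "all good" bit.
sets = {"p1": {"bid-1", "bid-2"}, "p2": {"bid-1"}}
print(bid_view_diff(["bid-1", "bid-2"], sets))  # {'p2': ['bid-2']}
```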
At the end of this exchange of bid views, each Node Provider has a set of “2f+1” honest bid views (assuming there are “2f+1” honest Node Providers in the network) and is able to decrypt them.
Working with only “f+1” bid views, a Node Provider can prove they have at least one bid view from an honest Node Provider, which is enough to find the best match and publish the result on-chain. But publishing so much data on-chain is neither cost- nor space-effective, so the Node Providers instead certify only the best match and send it to the disperser, which publishes just the best result along with an aggregated signature; a scheme like BLS signatures works well here. The reason for choosing this scheme is that it is easily verifiable and the signature size remains constant even as the number of Node Providers in the network increases.
To issue this certification for the winning bid, each Node Provider looks at all the decrypted bid views they have. A valid bid chosen by the Node Provider from a bid view has to exist in at least “f+1” of the bid sets that make up that bid view. The winning bid is the lowest valid bid in that bid view.
A Node Provider nominates the winning bid of a given bid view if it is less than or equal to the winning bid in that Node Provider's own bid view.
Each Node Provider constructs a threshold signature for each bid they are nominating and sends the set of all nominated bids, along with the signatures, to the disperser, producing a winning bid set in which each bid carries its signatures from the Node Providers.
Assuming all Node Providers are honest, the same bid will win in each bid view, and so each Node Provider will nominate only that single bid.
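A sketch of the selection and nomination rule from Step 3, applied once the bids in a bid view are decrypted: a bid is valid if it occurs in at least “f+1” bid sets of that view, the view's winner is its lowest valid bid, and a Node Provider nominates another view's winner only if it is less than or equal to the winner of its own view. The data shapes below are assumptions for illustration.

```python
from collections import Counter
from typing import Optional

def view_winner(bid_sets: dict[str, set[str]],
                prices: dict[str, float], f: int) -> Optional[str]:
    """Lowest bid that appears in at least f + 1 bid sets of this bid view."""
    counts = Counter()
    for bid_set in bid_sets.values():
        counts.update(bid_set)
    valid = [b for b, n in counts.items() if n >= f + 1]
    return min(valid, key=lambda b: prices[b]) if valid else None

def nominate(own_view: dict[str, set[str]],
             other_view: dict[str, set[str]],
             prices: dict[str, float], f: int) -> Optional[str]:
    """Nominate the other view's winner only if its price is less than or
    equal to the winning price in our own bid view."""
    own = view_winner(own_view, prices, f)
    other = view_winner(other_view, prices, f)
    if own is None or other is None:
        return None
    return other if prices[other] <= prices[own] else None

# With honest providers, every view yields the same winner, so each provider
# nominates exactly one bid, which the disperser then publishes together with
# an aggregated (e.g. BLS) signature.
prices = {"bid-A": 0.05, "bid-B": 0.03, "bid-C": 0.04}
view = {"p1": {"bid-A", "bid-B", "bid-C"},
        "p2": {"bid-A", "bid-B", "bid-C"},
        "p3": {"bid-B", "bid-C"}}
print(nominate(view, view, prices, f=1))  # bid-B
```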
Step 4: Matched Order Finalisation
Any bid that has been signed by at least “f+1” Node Providers is confirmed (assuming all Node Providers are honest, this will be a single bid) and the user's request is instantly matched with the bidding Node Provider. In some cases two or more bids may be confirmed (which means at least one participant faulted in step 1); in such scenarios the disperser decides the winning bid. To penalise the faulty Node Provider, it publishes all the confirmed bids on-chain in a separate smart contract so that the network can decide which Node Provider was faulty.
In the scenario where the L2 sequencer censors the auction result and does not include it in the L2 block, this history is kept with the disperser. For all subsequent auction results, the disperser will also publish the result of the previous auction, assuming the censorship is not indefinite in nature.
In this particular step, the disperser can work without any trust assumption, as each bid is signed by the Node Providers and the disperser cannot tamper with them. If the disperser itself censors the auction mechanism, goes offline, or tampers with the result, each Node Provider can publish the result on the task smart contract to finalise the auction mechanism and penalise the disperser.
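The finalisation check in Step 4 reduces to counting distinct Node Provider signatures per nominated bid; signature verification itself is elided here and assumed to happen elsewhere. A minimal sketch:

```python
def confirmed_bids(nominations: dict[str, set[str]], f: int) -> set[str]:
    """Return bids signed (nominated) by at least f + 1 distinct providers.

    nominations maps a bid id to the set of provider ids that signed it.
    With all providers honest this set contains exactly one bid; more than
    one confirmed bid implies a fault in step 1, and the disperser publishes
    all confirmed bids on-chain so the faulty provider can be identified.
    """
    return {bid for bid, signers in nominations.items() if len(signers) >= f + 1}

print(confirmed_bids({"bid-B": {"p1", "p2", "p3"}, "bid-A": {"p4"}}, f=1))
# {'bid-B'}
```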
Disputes/Faults That can occur in the system
The above system, while catering to latency and high throughput, also ensures that every participant in the system can always submit a fault proof as soon as they have enough evidence that a fault has occurred, most likely resulting in full or partial slashing of the offending participant's stake. Participants reporting fault proofs may receive part of the slashed stake as a bounty for reporting.
Four types of faults may exist in the proposed system:
A Node Provider can encounter conflicting bids present in different bid sets, or different bid sets present in different bid views. This kind of conflict is either intentional or, at the least, due to client-side issues as opposed to network problems.
A Node Provider's bid is absent from some collection of “f+1” bid sets. This could mean either that the Node Provider was dishonest and was attempting to back out of a bid they sent in round 1, or, hopefully rarely, that the Node Provider was honest and a network error occurred.
The disperser has received multiple winning bids and, instead of reporting the anomaly and choosing the best of the conflicting bids, chooses to match the user's request with a suboptimal bid. This kind of fault is intentional in nature or, at the least, due to issues in the disperser.
Node Providers can also try to bribe the L2 sequencer or the disperser to censor the auction result, potentially several times in a row, before it is finally submitted, as discussed above under step 4. The L2 sequencer is outside the protocol's bounds, but the disperser is still under the network's purview, so it can be heavily penalised for censoring the auction. If an honest participant unintentionally faults due to network issues, somebody could use these increasing penalties to grief them, but the griefer would have to pay to censor the blocks.
Penalties for Various Faults:
Conflicting Bids from Node Providers: Penalties for this can be quite severe, including full slashing.
Absence of Bid from Node Provider: The penalty for missing a bid should be higher than the value of the option to cancel a bid. However, this must be weighed against the occurrence of legitimate network failures and ultimately represents a quantitative issue. In extreme cases, a system of increasing penalties for repeated offences can reduce the profitability of this option and minimise its use.
Fault vs Network Failure Dispute resolution
If a Node Provider's bid set is missing “f+1” or more bids, we can assume that it lacks at least one bid from an honest Node Provider, given that no more than “f” Node Providers can be dishonest. Therefore, we can disregard this Node Provider's bid set when calculating absence faults, because they are either dishonest or experiencing a network partition. This approach should decrease the number of faults attributed to significant network disruptions.
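The filter described above can be expressed directly: any bid set missing “f+1” or more of the known bids is excluded before absence faults are counted. A hedged sketch with illustrative data shapes:

```python
def filter_bid_sets(all_bids: set[str],
                    bid_sets: dict[str, set[str]], f: int) -> dict[str, set[str]]:
    """Drop bid sets missing f + 1 or more bids before computing absence faults.

    Such a bid set necessarily lacks a bid from at least one honest provider
    (at most f providers are dishonest), so its owner is either dishonest or
    partitioned from the network and should not contribute to fault counting.
    """
    return {p: s for p, s in bid_sets.items()
            if len(all_bids - s) < f + 1}

sets = {"p1": {"b1", "b2", "b3"}, "p2": {"b1"}}        # p2 is missing 2 bids
print(filter_bid_sets({"b1", "b2", "b3"}, sets, f=1))  # only p1 remains
```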
Protocol Guarantees
Guarantee 1: No latecomers. For an AI Node Provider's bid to be considered valid in any bid view, and thus eligible to win, it must be sent to at least one honest AI Node Provider during step 1 of the auction.
Let's say an AI Node Provider fails to send their bid to any honest AI Node Provider in the first step of the auction. With “2f+1” honest AI Node Providers, this bid will show up in at most “f” bid sets in the second step of the auction, which falls short of the count necessary for it to be considered valid in any bid view.
If it is invalid in all bid views, it cannot emerge as the winning bid in any scenario, and no honest AI Node Provider will endorse it. Given that a bid requires nominations from at least “f+1” AI Node Providers to secure victory in the auction, at least one of those providers must be honest. Consequently, this bid is ineligible to win.
Guarantee 2: If there is no issue in the network, an AI Node Provider who sends their bid to all other AI Node Providers in step 1 of the auction will not be penalised in any way.
Assume AI Node Provider 1, run by Mark, is honest.
In step 1 of the auction, Mark's AI Node Provider sends its bid to all “3f+1” AI Node Providers. Since “2f+1” of them are honest, each honest AI Node Provider will transmit Mark's bid within their bid set to all other honest AI Node Providers. Consequently, every honest AI Node Provider will see Mark's bid in at least “2f+1” of the bid sets they receive, counting their own.
With a maximum of “3f+1” bid sets available, Mark's AI Node Provider's bid will be missing from at most “f” sets, preventing it from causing a conflict and being penalised.
Guarantee 3: If there is no issue in the network, an AI Node Provider who sends its bid to “f+1” or more honest AI Node Providers in step 1 of the auction will win the auction if it is the lowest bid sent to any honest AI Node Provider in step 1.
Let's assume Mark's AI Node Provider sends its bid to “f+1” honest AI Node Providers in step 1 of the auction. Each of these honest AI Node Providers then includes Mark's bid in their bid set, which they send to every other AI Node Provider. As a result, each of the “2f+1” honest AI Node Providers observes Mark's bid in at least “f+1” bid sets and hence considers it valid in their bid view.
Assuming Mark's AI Node Provider's bid is the lowest among all the bids sent to any honest AI Node Provider in step 1 of the auction, it becomes the winning bid in the bid view of every honest AI Node Provider, who subsequently endorses it as the winning bid.
According to the previously established Guarantee 1, no lower bid can qualify as valid in any bid view, so no honest AI Node Provider will endorse one. As a result, Mark's AI Node Provider's bid stands as the winning bid with at least “f+1” nominations, emerging as the winner of the auction.
An important point to consider is that two AI Node Providers' bids may tie as the lowest; this situation can be resolved through randomisation using the threshold decryption process.
Guarantee 4: Any AI Node Provider must send their bid to at least “f+1” honest AI Node Providers in step 1 of the auction to avoid being penalised.
Let's consider a case where an AI Node Provider sends their bid to only “k” of the honest AI Node Providers in the initial round, where “k” is less than or equal to “f”, possibly including themselves.
As a result, this bid will be present in “k” of the bid sets sent by honest AI Node Providers.
Each honest AI Node Provider will receive “2f+1” bid sets from honest AI Node Providers, including their own. Among these, only “k” sets will include the bid in question, so “2f+1-k” sets will not contain it. Since “k” is at most “f”, the number of bid sets that do not contain the bid (2f+1-k) is at least the threshold required to trigger a fault (f+1), so an absence fault is raised and the AI Node Provider is penalised.
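The counting argument above can be checked mechanically: for any k ≤ f, the number of honest bid sets not containing the bid, 2f+1-k, is at least the fault threshold f+1. A small illustrative check:

```python
def absence_count(f: int, k: int) -> int:
    """Honest bid sets (out of 2f + 1) that do NOT contain a bid
    sent to only k honest providers in step 1."""
    assert 0 <= k <= f, "Guarantee 4 considers k <= f"
    return (2 * f + 1) - k

# For every k <= f the absence count meets the fault threshold f + 1,
# so the under-distributing provider is always penalised.
f = 3
for k in range(f + 1):
    assert absence_count(f, k) >= f + 1
print([absence_count(f, k) for k in range(f + 1)])  # [7, 6, 5, 4] with f = 3
```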
Guarantee 5: No useless bids, i.e. no spamming of useless bids.
By virtue of Guarantee 3, any AI Node Provider who sends their bid to “f+1” or more honest AI Node Providers in step 1 of the auction will win the auction if it is the lowest bid sent to any honest AI Node Provider in step 1, no matter what they do later.
By virtue of Guarantee 4, any AI Node Provider who sends their bid to fewer than “f+1” honest AI Node Providers in step 1 will cause an error and will be penalised for it.
This ensures that while AI Node Providers do have the option to withdraw their bids from the system, they have to pay for it.
A genuine scenario can occur where an AI Node Provider accidentally, or due to a client fault, enters a wrong bid; in those cases the AI Node Provider can contest the penalty in front of a DAO-like system.
System Advantages and Trade-Offs in Comparison
Each proposed system comes with its own set of pros and cons when compared to the other possible solutions at hand. This system prioritises the trustless property while compromising on speed due to the peer-to-peer nature of the auction mechanism.
Pros:
The system is completely trustless and does not require any explicit trust among the participating entities for its functioning.
The system maximises social welfare, i.e. it ensures users are matched with the best possible AI Node Provider in terms of the user's price preference and other parameters.
The system ensures no AI Node Provider has an advantage over others by waiting to send their bid last.
The system ensures AI Node Providers do not skew the auction mechanism by sending artificially low bids and later withdrawing them, keeping the auction's incentives sound.
Cons:
Due to the peer-to-peer nature of the bid exchange procedure, the system is slower than alternatives that make some trust assumptions among participating AI Node Providers.
Due to the exchange of bid sets and bid views among participants, the system incurs considerable bandwidth consumption as the number of participating Node Providers increases, which may further slow the auction mechanism.