OpenCompute
Decentralized Compute Infrastructure for AI and Beyond

OpenCompute is a decentralized compute framework built to reshape the future of permissionless infrastructure. It aggregates global idle computing power through browser-based zero-install nodes and forms a distributed, verifiable execution layer optimized for AI and beyond.
It natively supports:
AI inference and training
Numerical computation
Graphical rendering
Privacy-preserving computing (e.g. ZK & MPC)
💎 Key Innovations - Zero-Setup, Scalable, and Developer-Friendly
OpenCompute delivers a seamless and radically simplified compute onboarding experience. Unlike traditional decentralized compute networks that require complex hardware configurations and software setups, Nebulai introduces a breakthrough: browser-native compute activation. No installations. No environment tuning. Just sign in, and your device becomes part of a global distributed compute network.
This zero-barrier access transforms everyday devices into usable compute nodes—instantly and elastically—and forms a foundational pillar of Nebulai's scalable compute infrastructure.
From the developer’s perspective, the Nebulai SDK offers true plug-and-play usability. With just a few lines of Python—no complex dependencies or migrations—users can seamlessly trigger powerful remote compute tasks.
We’re also launching remote GPU scheduling, designed with a minimalist interface. For example:
RemoteGPUAdd(1, 2)
This is high-performance computing distilled into a single line of code—delivering real-time responsiveness, on-demand scalability, and massive productivity gains for AI, science, and engineering workloads.
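As a minimal sketch of how the RemoteGPUAdd call above might look end to end, assuming a hypothetical nebulai Python module with connect() and a remote decorator (the published SDK API may differ):

# Illustrative only: the module name "nebulai", connect(), and the @remote
# decorator are assumptions for this sketch, not the published SDK API.
import asyncio
from nebulai import connect, remote  # hypothetical SDK imports

@remote(device="gpu")  # register the body as a remote function ("rune")
async def RemoteGPUAdd(a: int, b: int) -> int:
    return a + b

async def main():
    await connect()  # sign in; the network supplies the compute
    result = await RemoteGPUAdd(1, 2)  # executes on a remote Worker's GPU
    print(result)  # -> 3

asyncio.run(main())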
Nebulai redefines compute usability: from “setup-intensive” to “instant-ready.”
🎯 Target Users & Use Cases
👩‍💻 Target Users
AI Agent Developers: Seamlessly deploy and monetize inference workloads without managing infrastructure.
AI Startups / Data Companies: Access affordable, elastic GPU resources on demand without cloud lock-in.
Enterprise Clients / Web3 Protocols: Integrate AI-powered services like intelligent assistants, autonomous research, or smart oracles into dApps or enterprise stacks.
Creators / Tool Platforms: Access compute for rendering, speech processing, or creative workloads via Telegram bots, web widgets, or lightweight SDKs.
🔧 Core Use Cases
AI Training & Inference: Distributed execution of ML workloads across decentralized CPU/GPU clusters.
Numerical Computation: Run large-scale simulations such as matrix operations, PDE solving, or fluid dynamics.
Graphics Rendering: Enable high-performance ray tracing and batch rendering for visual media or digital twins.
Cryptographic Computing: Power ZK proofs, multiparty computation (MPC), and privacy-enhancing protocols.
General Parallel Jobs: Execute high-load, concurrent, and long-duration computing tasks.
🧱 System Architecture
👥 Roles in the Network
User: Uses the Python SDK to define and trigger remote functions ("runes") as if calling a local async method.
Worker: Runs in the browser—no installation required. Contributes CPU/GPU power in a secure sandboxed environment.
Verifier: Runs a Dockerized node that manages and validates a cluster of Workers. May be sold or delegated as a high-trust compute provider.
Central: Acts as the discovery and coordination layer. Handles identity verification, task scheduling, and Worker registration.
⚙️ Technical Design
💡 Execution Engine
Built using Rust + Rune + WebAssembly.
Rune is a lightweight embedded scripting language for real-time compilation and execution.
Code is sandboxed in-browser and encrypted using login-derived symmetric keys.
Workers execute remote tasks without accessing user data or logic.
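As a minimal sketch of the encryption step, assuming PBKDF2 key derivation and AES-GCM via the Python cryptography package (the actual key-derivation scheme and cipher are not specified here):

# Illustrative only: PBKDF2 and AES-GCM are assumptions for this sketch;
# OpenCompute's actual key-derivation and cipher choices may differ.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def login_derived_key(login_secret: str, salt: bytes) -> bytes:
    # Stretch the login secret into a 256-bit symmetric key.
    return hashlib.pbkdf2_hmac("sha256", login_secret.encode(), salt, 200_000)

key = login_derived_key("user-login-secret", salt=os.urandom(16))
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"task code and inputs", None)
# Only a holder of the same login-derived key can decrypt the payload.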
🌐 Network & Scheduling
Two operational modes:
Centralized API + Reverse Tunnels for easy onboarding (no public IP needed)
libp2p-based Peer Discovery for a fully decentralized P2P network (optional)
Central assigns port mappings and coordinates Worker–Verifier communication.
📈 Scalability & Parallelism
The execution engine is Turing-complete and designed for multi-task concurrency.
Tasks are assigned from a dynamic queue based on Worker availability and trust score, as in the dispatch pseudocode below.
// With three or more pending Workers, fan the task out to a third of them
// for redundant execution; otherwise fall back to a single Worker.
if nowPendingWorkerNum >= 3 {
    useWorkerNum := nowPendingWorkerNum / 3
    for i := 0; i < useWorkerNum; i++ {
        ... // query worker from pending queue
        go WorkerRunCode(worker, runCodeInfo)
    }
} else {
    ... // query worker from pending queue
    go WorkerRunCode(worker, runCodeInfo)
}
🔐 Trust, Security, and Reliability
🔁 Dynamic Trust System
The first Worker to return a valid result is credited; others verify consistency.
Trust scores are updated based on:
Accuracy vs consensus
Speed and completion ratio
High-trust Workers are prioritized in future scheduling.
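As a minimal sketch of how those signals might combine into a single score update (the weights and function shape are assumptions for illustration, not the network's actual trust formula):

# Illustrative only: weights and thresholds are assumptions for this sketch.
def update_trust(score: float, agreed_with_consensus: bool,
                 completed: bool, exec_time: float, avg_time: float) -> float:
    delta = 1.0 if agreed_with_consensus else -2.0  # accuracy vs consensus dominates
    if not completed:
        delta -= 1.0  # a missed completion lowers the score further
    elif exec_time <= avg_time:
        delta += 0.5  # reward faster-than-average finishers
    return max(0.0, score + delta)  # scores stay non-negative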
🔄 Resilience & Reallocation
Task execution time is logged per Worker.
If a task runs longer than 1.5× the logged average execution time, additional Workers are dispatched to ensure completion (a sketch of this check follows the code below).
The first correct result is accepted; others are used for consensus voting and fraud detection.
// While the task is ongoing: return the first valid result immediately;
// every later result is compared against it and counted as a vote for or
// against the first finisher.
if isFirstFinishedWorker {
    finishWork(workerUid)
    return taskResult
} else {
    finishWork(workerUid)
    if taskResult == firstReturnResult {
        taskTrusty[taskId][firstFinishWorker]++
    } else {
        taskTrustless[taskId][firstFinishWorker]++
    }
}

// After all tasks finish: adjust each voted-on Worker's global trust score
// according to whether consensus agreed with its result.
for taskId, votes := range taskTrusty {
    for workerUid := range votes {
        if taskTrusty[taskId][workerUid] > taskTrustless[taskId][workerUid] {
            globalTrusty[workerUid]++
        } else {
            globalTrusty[workerUid]--
        }
    }
}
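As a minimal sketch of the 1.5× reallocation check described above (the Task fields and dispatch helper are assumed names for illustration):

# Illustrative only: Task fields and dispatch() are assumptions.
import time
from dataclasses import dataclass

TIMEOUT_FACTOR = 1.5

@dataclass
class Task:
    started_at: float     # monotonic timestamp when execution began
    avg_exec_time: float  # logged average execution time, in seconds

def dispatch(task: Task) -> None:
    # Placeholder: hand the task to an additional pending Worker.
    print("re-dispatching slow task")

def check_reallocation(task: Task) -> None:
    # Dispatch extra Workers once a task runs 1.5x longer than its logged
    # average; the first valid result returned still wins.
    if time.monotonic() - task.started_at > TIMEOUT_FACTOR * task.avg_exec_time:
        dispatch(task)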
✅ Summary
OpenCompute replaces the rent-seeking cloud model with a fair, decentralized, verifiable compute network. It serves as the infrastructure layer powering Nebulai’s multi-agent AI ecosystem and broader Web3 applications.
By democratizing compute access and embedding trust into its architecture, OpenCompute empowers developers, node operators, and users to collaboratively build the future of decentralized intelligence.