
OpenCompute

Decentralized Compute Infrastructure for AI and Beyond



OpenCompute is a decentralized compute framework built to reshape the future of permissionless infrastructure. It aggregates global idle computing power through browser-based zero-install nodes and forms a distributed, verifiable execution layer optimized for AI and beyond.

It natively supports:

  • AI inference and training

  • Numerical computation

  • Graphical rendering

  • Privacy-preserving computing (e.g. ZK & MPC)

By combining sandboxed remote execution, dynamic trust verification, and decentralized scheduling, OpenCompute offers a scalable, resilient, and censorship-resistant alternative to traditional cloud computing.


🎯 Target Users & Use Cases

👩‍💻 Target Users

  • AI Agent Developers: Seamlessly deploy and monetize inference workloads without managing infrastructure.

  • AI Startups / Data Companies: Access affordable, elastic GPU resources on demand without cloud lock-in.

  • Enterprise Clients / Web3 Protocols: Integrate AI-powered services such as intelligent assistants, autonomous research, or smart oracles into dApps or enterprise stacks.

  • Creators / Tool Platforms: Access compute for rendering, speech processing, or creative workloads via Telegram bots, web widgets, or lightweight SDKs.

🔧 Core Use Cases

  • AI Training & Inference: Distributed execution of ML workloads across decentralized CPU/GPU clusters.

  • Numerical Computation: Run large-scale simulations such as matrix operations, PDE solving, or fluid dynamics.

  • Graphics Rendering: Enable high-performance ray tracing and batch rendering for visual media or digital twins.

  • Cryptographic Computing: Power ZK-proofs, multi-party computation (MPC), and other privacy-enhancing protocols.

  • General Parallel Jobs: Execute high-load, concurrent, and long-duration computing tasks.


🧱 System Architecture

👥 Roles in the Network

  • User: Uses the Python SDK to define and trigger remote functions ("runes") as if calling a local async method.

  • Worker: Runs in the browser with no installation required, contributing CPU/GPU power in a secure sandboxed environment.

  • Verifier: Runs a Dockerized node that manages and validates a cluster of Workers. May be sold or delegated as a high-trust compute provider.

  • Central: Acts as the discovery and coordination layer, handling identity verification, task scheduling, and worker registration.
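The interaction between these roles can be sketched as a single scheduling step, in which Central picks an idle Worker by trust score. This is a minimal illustration only; the type and function names (`TaskRequest`, `Worker`, `ScheduleTask`) are hypothetical, not the actual OpenCompute API:

```go
package main

import "fmt"

// TaskRequest is a hypothetical envelope for a user's remote function.
type TaskRequest struct {
	UserID string
	Rune   string // serialized remote function ("rune")
}

// Worker is a hypothetical view of a registered browser node.
type Worker struct {
	UID        string
	TrustScore int
	Busy       bool
}

// ScheduleTask picks the idle Worker with the highest trust score,
// mimicking Central's role as the coordination layer.
func ScheduleTask(req TaskRequest, workers []Worker) (string, error) {
	best := -1
	for i, w := range workers {
		if w.Busy {
			continue
		}
		if best == -1 || w.TrustScore > workers[best].TrustScore {
			best = i
		}
	}
	if best == -1 {
		return "", fmt.Errorf("no idle worker for task from %s", req.UserID)
	}
	return workers[best].UID, nil
}

func main() {
	workers := []Worker{
		{UID: "w1", TrustScore: 5, Busy: true},
		{UID: "w2", TrustScore: 9},
		{UID: "w3", TrustScore: 7},
	}
	uid, _ := ScheduleTask(TaskRequest{UserID: "u1", Rune: "fib(30)"}, workers)
	fmt.Println("dispatched to", uid)
}
```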


⚙️ Technical Design

💡 Execution Engine

  • Built using Rust + Rune + WebAssembly.

  • Rune is a lightweight embedded scripting language for real-time compilation and execution.

  • Code is sandboxed in-browser and encrypted using login-derived symmetric keys.

  • Workers execute remote tasks without accessing user data or logic.
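A minimal sketch of the login-derived encryption described above, assuming an AES-GCM scheme with a SHA-256 key derivation. These assumptions are illustrative: a production system would use a hardened KDF such as Argon2, and the function names here are not the real OpenCompute API.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// deriveKey turns login credentials into a 32-byte AES key.
// SHA-256 keeps this sketch dependency-free; real systems
// should use a memory-hard KDF (scrypt, Argon2).
func deriveKey(login, password string) []byte {
	sum := sha256.Sum256([]byte(login + ":" + password))
	return sum[:]
}

// seal encrypts a rune payload with AES-GCM, so a Worker only
// ever handles ciphertext, never the user's code or data.
func seal(key, payload []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so the receiver can decrypt.
	return gcm.Seal(nonce, nonce, payload, nil), nil
}

// open reverses seal on the user's side with the same derived key.
func open(key, sealed []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ct := sealed[:gcm.NonceSize()], sealed[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ct, nil)
}

func main() {
	key := deriveKey("alice", "s3cret")
	ct, _ := seal(key, []byte(`fn main() { 1 + 1 }`))
	pt, _ := open(key, ct)
	fmt.Println(string(pt))
}
```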

🌐 Network & Scheduling

  • Two operational modes:

    • Centralized API + Reverse Tunnels for easy onboarding (no public IP needed)

    • libp2p-based Peer Discovery for fully decentralized P2P network (optional)

  • Central assigns port mappings and coordinates Worker–Verifier communication.

📈 Scalability & Parallelism

  • Turing-complete and designed for multi-task concurrency.

  • Tasks are assigned from a dynamic queue based on worker availability and trust score.

if nowPendingWorkerNum >= 3 {
    useWorkerNum := nowPendingWorkerNum / 3

    for i := 0; i < useWorkerNum; i++ {
        ... // query a worker from the pending queue
        go WorkerRunCode(worker, runCodeInfo)
    }
} else {
    ... // query a worker from the pending queue
    go WorkerRunCode(worker, runCodeInfo)
}

🔐 Trust, Security, and Reliability

🔁 Dynamic Trust System

  • The first Worker to return a valid result is credited; others verify consistency.

  • Trust scores are updated based on:

    • Accuracy vs consensus

    • Speed and completion ratio

  • High-trust Workers are prioritized in future scheduling.
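One way the two signals above could combine into a single score is sketched below. The formula and weights are purely illustrative assumptions, not the actual trust function:

```go
package main

import "fmt"

// WorkerStats is a hypothetical record of a Worker's history.
type WorkerStats struct {
	Correct   int     // results matching consensus
	Incorrect int     // results diverging from consensus
	AvgMillis float64 // mean task completion time
}

// trustScore rewards accuracy vs consensus and faster-than-average
// completion. Speed is capped so it cannot outweigh correctness.
func trustScore(s WorkerStats, clusterAvgMillis float64) float64 {
	total := s.Correct + s.Incorrect
	if total == 0 {
		return 0 // no history, no trust yet
	}
	accuracy := float64(s.Correct) / float64(total)
	speed := clusterAvgMillis / s.AvgMillis // >1 means faster than average
	if speed > 2 {
		speed = 2
	}
	return accuracy * speed
}

func main() {
	fast := WorkerStats{Correct: 9, Incorrect: 1, AvgMillis: 500}
	slow := WorkerStats{Correct: 9, Incorrect: 1, AvgMillis: 2000}
	fmt.Printf("fast: %.2f  slow: %.2f\n", trustScore(fast, 1000), trustScore(slow, 1000))
}
```

Under this toy formula, two equally accurate Workers are ranked by speed, matching the bullet points above: accuracy vs consensus dominates, with completion speed as a secondary factor.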

🔄 Resilience & Reallocation

  • Task execution time is logged per Worker.

  • If a task's runtime exceeds 1.5× that Worker's average execution time, additional Workers are dispatched to ensure completion.

  • The first correct result is accepted; others are used for consensus voting and fraud detection.
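The 1.5× reallocation rule can be sketched as a simple predicate that a scheduler checks against each running task. `needsReallocation` is a hypothetical helper for illustration, not part of the real codebase:

```go
package main

import "fmt"

// needsReallocation reports whether a running task has exceeded
// 1.5x the worker's logged average execution time, which is the
// trigger for dispatching additional workers.
func needsReallocation(elapsedMillis, avgMillis float64) bool {
	return elapsedMillis > 1.5*avgMillis
}

func main() {
	avg := 800.0 // logged average for this worker, in ms
	for _, elapsed := range []float64{600, 1100, 1300} {
		if needsReallocation(elapsed, avg) {
			fmt.Printf("%.0fms: dispatch additional workers\n", elapsed)
		} else {
			fmt.Printf("%.0fms: still within budget\n", elapsed)
		}
	}
}
```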

// While the task is still ongoing
if firstFinishTaskWorker {
    finishWork(workerUid)
    return taskResult
} else {
    finishWork(workerUid)
    // Later results vote on the first returned result
    if taskResult == firstReturnResult {
        taskTrusty[taskId][firstFinishWorker]++
    } else {
        taskTrustless[taskId][firstFinishWorker]++
    }
}

// After all tasks finish
for taskId, votes := range taskTrusty {
    for workerUid := range votes {
        if taskTrusty[taskId][workerUid] > taskTrustless[taskId][workerUid] {
            globalTrusty[workerUid]++
        } else {
            globalTrusty[workerUid]--
        }
    }
}

✅ Summary

OpenCompute replaces the rent-seeking cloud model with a fair, decentralized, verifiable compute network. It serves as the infrastructure layer powering Nebulai’s multi-agent AI ecosystem and broader Web3 applications.

By democratizing compute access and embedding trust into its architecture, OpenCompute empowers developers, node operators, and users to collaboratively build the future of decentralized intelligence.